00:00:00.000 Started by upstream project "autotest-per-patch" build number 127139 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.140 Fetching changes from the remote Git repository 00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.200 Using shallow fetch with depth 1 00:00:00.200 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.200 > git --version # timeout=10 00:00:00.238 > git --version # 'git version 2.39.2' 00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.272 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.370 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.381 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.395 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.395 > git config core.sparsecheckout # timeout=10 00:00:05.408 > git read-tree -mu HEAD # timeout=10 00:00:05.427 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.460 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.461 > git rev-list --no-walk 86cd2acf6b4646bdb5ab15e0e320711d17ba4742 # timeout=10 00:00:05.572 [Pipeline] Start of Pipeline 00:00:05.584 [Pipeline] library 00:00:05.586 Loading library shm_lib@master 00:00:05.586 Library shm_lib@master is cached. Copying from home. 00:00:05.599 [Pipeline] node 00:00:20.602 Still waiting to schedule task 00:00:20.602 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:46.873 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:46.875 [Pipeline] { 00:01:46.889 [Pipeline] catchError 00:01:46.891 [Pipeline] { 00:01:46.908 [Pipeline] wrap 00:01:46.919 [Pipeline] { 00:01:46.933 [Pipeline] stage 00:01:46.937 [Pipeline] { (Prologue) 00:01:46.961 [Pipeline] echo 00:01:46.963 Node: VM-host-WFP7 00:01:46.970 [Pipeline] cleanWs 00:01:46.981 [WS-CLEANUP] Deleting project workspace... 00:01:46.981 [WS-CLEANUP] Deferred wipeout is used... 
00:01:46.987 [WS-CLEANUP] done 00:01:47.163 [Pipeline] setCustomBuildProperty 00:01:47.251 [Pipeline] httpRequest 00:01:47.269 [Pipeline] echo 00:01:47.270 Sorcerer 10.211.164.101 is alive 00:01:47.279 [Pipeline] httpRequest 00:01:47.285 HttpMethod: GET 00:01:47.285 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:01:47.287 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:01:47.290 Response Code: HTTP/1.1 200 OK 00:01:47.291 Success: Status code 200 is in the accepted range: 200,404 00:01:47.291 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:01:54.103 [Pipeline] sh 00:01:54.380 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:01:54.396 [Pipeline] httpRequest 00:01:54.415 [Pipeline] echo 00:01:54.417 Sorcerer 10.211.164.101 is alive 00:01:54.428 [Pipeline] httpRequest 00:01:54.432 HttpMethod: GET 00:01:54.433 URL: http://10.211.164.101/packages/spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:01:54.433 Sending request to url: http://10.211.164.101/packages/spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:01:54.438 Response Code: HTTP/1.1 200 OK 00:01:54.438 Success: Status code 200 is in the accepted range: 200,404 00:01:54.439 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:02:18.392 [Pipeline] sh 00:02:18.675 + tar --no-same-owner -xf spdk_c0d54772e8d46e080f95fc5b44563b03791fcccd.tar.gz 00:02:21.966 [Pipeline] sh 00:02:22.246 + git -C spdk log --oneline -n5 00:02:22.246 c0d54772e test/common: Include test/nvme in the reap_spdk_processes() lookup 00:02:22.246 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:02:22.246 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:02:22.246 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:02:22.246 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:02:22.264 [Pipeline] writeFile 00:02:22.280 [Pipeline] sh 00:02:22.558 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:22.569 [Pipeline] sh 00:02:22.847 + cat autorun-spdk.conf 00:02:22.847 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.847 SPDK_TEST_NVMF=1 00:02:22.847 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.847 SPDK_TEST_USDT=1 00:02:22.847 SPDK_TEST_NVMF_MDNS=1 00:02:22.847 SPDK_RUN_UBSAN=1 00:02:22.847 NET_TYPE=virt 00:02:22.847 SPDK_JSONRPC_GO_CLIENT=1 00:02:22.847 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.853 RUN_NIGHTLY=0 00:02:22.855 [Pipeline] } 00:02:22.873 [Pipeline] // stage 00:02:22.886 [Pipeline] stage 00:02:22.888 [Pipeline] { (Run VM) 00:02:22.900 [Pipeline] sh 00:02:23.176 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:23.176 + echo 'Start stage prepare_nvme.sh' 00:02:23.176 Start stage prepare_nvme.sh 00:02:23.176 + [[ -n 5 ]] 00:02:23.176 + disk_prefix=ex5 00:02:23.176 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:02:23.176 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:02:23.176 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:02:23.176 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.176 ++ SPDK_TEST_NVMF=1 00:02:23.176 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:23.176 ++ SPDK_TEST_USDT=1 00:02:23.176 ++ SPDK_TEST_NVMF_MDNS=1 00:02:23.176 ++ SPDK_RUN_UBSAN=1 00:02:23.176 ++ NET_TYPE=virt 00:02:23.176 ++ SPDK_JSONRPC_GO_CLIENT=1 
00:02:23.176 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.176 ++ RUN_NIGHTLY=0 00:02:23.176 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:23.176 + nvme_files=() 00:02:23.176 + declare -A nvme_files 00:02:23.176 + backend_dir=/var/lib/libvirt/images/backends 00:02:23.176 + nvme_files['nvme.img']=5G 00:02:23.176 + nvme_files['nvme-cmb.img']=5G 00:02:23.176 + nvme_files['nvme-multi0.img']=4G 00:02:23.176 + nvme_files['nvme-multi1.img']=4G 00:02:23.176 + nvme_files['nvme-multi2.img']=4G 00:02:23.176 + nvme_files['nvme-openstack.img']=8G 00:02:23.176 + nvme_files['nvme-zns.img']=5G 00:02:23.176 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:23.176 + (( SPDK_TEST_FTL == 1 )) 00:02:23.176 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:23.176 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:02:23.176 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:02:23.176 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:02:23.176 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:02:23.176 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:02:23.176 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:02:23.176 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.176 + for nvme in "${!nvme_files[@]}" 00:02:23.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:02:23.435 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.435 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:02:23.435 + echo 'End stage prepare_nvme.sh' 00:02:23.435 End stage prepare_nvme.sh 00:02:23.447 [Pipeline] sh 00:02:23.729 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:23.729 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:02:23.729 00:02:23.729 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:02:23.729 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:02:23.729 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:23.729 HELP=0 00:02:23.729 DRY_RUN=0 00:02:23.729 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:02:23.729 NVME_DISKS_TYPE=nvme,nvme, 00:02:23.729 NVME_AUTO_CREATE=0 00:02:23.729 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:02:23.729 NVME_CMB=,, 00:02:23.729 NVME_PMR=,, 00:02:23.729 NVME_ZNS=,, 00:02:23.729 NVME_MS=,, 00:02:23.729 NVME_FDP=,, 00:02:23.729 SPDK_VAGRANT_DISTRO=fedora38 00:02:23.729 SPDK_VAGRANT_VMCPU=10 00:02:23.729 SPDK_VAGRANT_VMRAM=12288 00:02:23.729 SPDK_VAGRANT_PROVIDER=libvirt 00:02:23.729 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:23.729 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:23.729 SPDK_OPENSTACK_NETWORK=0 00:02:23.729 VAGRANT_PACKAGE_BOX=0 00:02:23.729 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:23.729 FORCE_DISTRO=true 00:02:23.729 VAGRANT_BOX_VERSION= 00:02:23.729 EXTRA_VAGRANTFILES= 00:02:23.729 NIC_MODEL=virtio 00:02:23.729 00:02:23.729 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:02:23.729 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:02:27.021 Bringing machine 'default' up with 'libvirt' provider... 00:02:27.279 ==> default: Creating image (snapshot of base box volume). 00:02:27.538 ==> default: Creating domain with the following settings... 
00:02:27.538 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721891819_d4b82ca99a1ab726aef1 00:02:27.538 ==> default: -- Domain type: kvm 00:02:27.538 ==> default: -- Cpus: 10 00:02:27.538 ==> default: -- Feature: acpi 00:02:27.538 ==> default: -- Feature: apic 00:02:27.538 ==> default: -- Feature: pae 00:02:27.538 ==> default: -- Memory: 12288M 00:02:27.538 ==> default: -- Memory Backing: hugepages: 00:02:27.538 ==> default: -- Management MAC: 00:02:27.538 ==> default: -- Loader: 00:02:27.538 ==> default: -- Nvram: 00:02:27.538 ==> default: -- Base box: spdk/fedora38 00:02:27.538 ==> default: -- Storage pool: default 00:02:27.538 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721891819_d4b82ca99a1ab726aef1.img (20G) 00:02:27.538 ==> default: -- Volume Cache: default 00:02:27.538 ==> default: -- Kernel: 00:02:27.538 ==> default: -- Initrd: 00:02:27.538 ==> default: -- Graphics Type: vnc 00:02:27.538 ==> default: -- Graphics Port: -1 00:02:27.538 ==> default: -- Graphics IP: 127.0.0.1 00:02:27.538 ==> default: -- Graphics Password: Not defined 00:02:27.538 ==> default: -- Video Type: cirrus 00:02:27.538 ==> default: -- Video VRAM: 9216 00:02:27.538 ==> default: -- Sound Type: 00:02:27.538 ==> default: -- Keymap: en-us 00:02:27.538 ==> default: -- TPM Path: 00:02:27.538 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:27.538 ==> default: -- Command line args: 00:02:27.538 ==> default: -> value=-device, 00:02:27.538 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:27.538 ==> default: -> value=-drive, 00:02:27.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:02:27.538 ==> default: -> value=-device, 00:02:27.538 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:27.538 ==> default: -> value=-device, 00:02:27.538 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:27.538 ==> default: -> value=-drive, 00:02:27.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:27.538 ==> default: -> value=-device, 00:02:27.538 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:27.538 ==> default: -> value=-drive, 00:02:27.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:27.538 ==> default: -> value=-device, 00:02:27.538 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:27.538 ==> default: -> value=-drive, 00:02:27.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:27.538 ==> default: -> value=-device, 00:02:27.538 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:27.538 ==> default: Creating shared folders metadata... 00:02:27.797 ==> default: Starting domain. 00:02:29.177 ==> default: Waiting for domain to get an IP address... 00:02:44.043 ==> default: Waiting for SSH to become available... 00:02:44.986 ==> default: Configuring and enabling network interfaces... 
00:02:50.314 default: SSH address: 192.168.121.230:22 00:02:50.314 default: SSH username: vagrant 00:02:50.314 default: SSH auth method: private key 00:02:52.220 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:00.335 ==> default: Mounting SSHFS shared folder... 00:03:01.720 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:01.720 ==> default: Checking Mount.. 00:03:03.097 ==> default: Folder Successfully Mounted! 00:03:03.098 ==> default: Running provisioner: file... 00:03:03.663 default: ~/.gitconfig => .gitconfig 00:03:04.231 00:03:04.231 SUCCESS! 00:03:04.231 00:03:04.231 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:03:04.231 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:04.231 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:03:04.231 00:03:04.240 [Pipeline] } 00:03:04.259 [Pipeline] // stage 00:03:04.269 [Pipeline] dir 00:03:04.270 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:03:04.272 [Pipeline] { 00:03:04.287 [Pipeline] catchError 00:03:04.289 [Pipeline] { 00:03:04.302 [Pipeline] sh 00:03:04.578 + vagrant ssh-config --host vagrant 00:03:04.578 + sed -ne /^Host/,$p 00:03:04.578 + tee ssh_conf 00:03:08.802 Host vagrant 00:03:08.802 HostName 192.168.121.230 00:03:08.802 User vagrant 00:03:08.802 Port 22 00:03:08.802 UserKnownHostsFile /dev/null 00:03:08.802 StrictHostKeyChecking no 00:03:08.802 PasswordAuthentication no 00:03:08.802 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:03:08.802 IdentitiesOnly yes 00:03:08.802 LogLevel FATAL 00:03:08.802 ForwardAgent yes 00:03:08.802 ForwardX11 yes 00:03:08.802 00:03:08.815 [Pipeline] withEnv 00:03:08.818 [Pipeline] { 00:03:08.832 [Pipeline] sh 00:03:09.112 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:09.112 source /etc/os-release 00:03:09.112 [[ -e /image.version ]] && img=$(< /image.version) 00:03:09.112 # Minimal, systemd-like check. 00:03:09.112 if [[ -e /.dockerenv ]]; then 00:03:09.112 # Clear garbage from the node's name: 00:03:09.112 # agt-er_autotest_547-896 -> autotest_547-896 00:03:09.112 # $HOSTNAME is the actual container id 00:03:09.112 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:09.112 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:09.112 # We can assume this is a mount from a host where container is running, 00:03:09.112 # so fetch its hostname to easily identify the target swarm worker. 
00:03:09.112 container="$(< /etc/hostname) ($agent)" 00:03:09.112 else 00:03:09.112 # Fallback 00:03:09.112 container=$agent 00:03:09.112 fi 00:03:09.112 fi 00:03:09.112 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:09.112 00:03:09.383 [Pipeline] } 00:03:09.403 [Pipeline] // withEnv 00:03:09.411 [Pipeline] setCustomBuildProperty 00:03:09.429 [Pipeline] stage 00:03:09.432 [Pipeline] { (Tests) 00:03:09.451 [Pipeline] sh 00:03:09.775 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:09.789 [Pipeline] sh 00:03:10.066 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:10.084 [Pipeline] timeout 00:03:10.084 Timeout set to expire in 40 min 00:03:10.086 [Pipeline] { 00:03:10.108 [Pipeline] sh 00:03:10.385 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:10.950 HEAD is now at c0d54772e test/common: Include test/nvme in the reap_spdk_processes() lookup 00:03:10.961 [Pipeline] sh 00:03:11.238 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:11.507 [Pipeline] sh 00:03:11.786 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:12.059 [Pipeline] sh 00:03:12.340 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:03:12.598 ++ readlink -f spdk_repo 00:03:12.598 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:12.598 + [[ -n /home/vagrant/spdk_repo ]] 00:03:12.598 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:12.598 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:12.598 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:12.598 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:12.598 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:12.598 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:03:12.598 + cd /home/vagrant/spdk_repo 00:03:12.598 + source /etc/os-release 00:03:12.598 ++ NAME='Fedora Linux' 00:03:12.598 ++ VERSION='38 (Cloud Edition)' 00:03:12.598 ++ ID=fedora 00:03:12.598 ++ VERSION_ID=38 00:03:12.598 ++ VERSION_CODENAME= 00:03:12.598 ++ PLATFORM_ID=platform:f38 00:03:12.598 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:12.598 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:12.598 ++ LOGO=fedora-logo-icon 00:03:12.598 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:12.598 ++ HOME_URL=https://fedoraproject.org/ 00:03:12.599 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:12.599 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:12.599 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:12.599 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:12.599 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:12.599 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:12.599 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:12.599 ++ SUPPORT_END=2024-05-14 00:03:12.599 ++ VARIANT='Cloud Edition' 00:03:12.599 ++ VARIANT_ID=cloud 00:03:12.599 + uname -a 00:03:12.599 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:12.599 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:13.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:13.204 Hugepages 00:03:13.204 node hugesize free / total 00:03:13.204 node0 1048576kB 0 / 0 00:03:13.204 node0 2048kB 0 / 0 00:03:13.204 00:03:13.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.204 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:13.204 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:13.204 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:13.204 + rm -f /tmp/spdk-ld-path 00:03:13.204 + source autorun-spdk.conf 00:03:13.204 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:13.204 ++ SPDK_TEST_NVMF=1 00:03:13.204 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:13.204 ++ SPDK_TEST_USDT=1 00:03:13.204 ++ SPDK_TEST_NVMF_MDNS=1 00:03:13.204 ++ SPDK_RUN_UBSAN=1 00:03:13.204 ++ NET_TYPE=virt 00:03:13.204 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:13.204 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:13.204 ++ RUN_NIGHTLY=0 00:03:13.204 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:13.204 + [[ -n '' ]] 00:03:13.204 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:13.204 + for M in /var/spdk/build-*-manifest.txt 00:03:13.204 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:13.204 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:13.204 + for M in /var/spdk/build-*-manifest.txt 00:03:13.204 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:13.204 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:13.204 ++ uname 00:03:13.204 + [[ Linux == \L\i\n\u\x ]] 00:03:13.204 + sudo dmesg -T 00:03:13.204 + sudo dmesg --clear 00:03:13.204 + dmesg_pid=5328 00:03:13.204 + sudo dmesg -Tw 00:03:13.204 + [[ Fedora Linux == FreeBSD ]] 00:03:13.204 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:13.204 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:13.205 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:13.205 + [[ -x /usr/src/fio-static/fio ]] 00:03:13.205 + 
export FIO_BIN=/usr/src/fio-static/fio 00:03:13.205 + FIO_BIN=/usr/src/fio-static/fio 00:03:13.205 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:13.205 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:13.205 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:13.205 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:13.205 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:13.205 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:13.205 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:13.205 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:13.205 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:13.205 Test configuration: 00:03:13.205 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:13.205 SPDK_TEST_NVMF=1 00:03:13.205 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:13.205 SPDK_TEST_USDT=1 00:03:13.205 SPDK_TEST_NVMF_MDNS=1 00:03:13.205 SPDK_RUN_UBSAN=1 00:03:13.205 NET_TYPE=virt 00:03:13.205 SPDK_JSONRPC_GO_CLIENT=1 00:03:13.205 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:13.463 RUN_NIGHTLY=0 07:17:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:13.463 07:17:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:13.463 07:17:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.463 07:17:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.463 07:17:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.463 07:17:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.463 07:17:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.463 07:17:46 -- paths/export.sh@5 -- $ export PATH 00:03:13.463 07:17:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.463 07:17:46 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:13.463 07:17:46 -- common/autobuild_common.sh@447 -- $ date +%s 00:03:13.463 07:17:46 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721891866.XXXXXX 00:03:13.463 07:17:46 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721891866.fPqi6X 00:03:13.463 07:17:46 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:03:13.463 07:17:46 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:03:13.463 07:17:46 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:13.463 07:17:46 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:13.463 07:17:46 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:13.463 07:17:46 -- common/autobuild_common.sh@463 -- $ get_config_params 00:03:13.463 07:17:46 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:03:13.463 07:17:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.463 07:17:46 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:03:13.463 07:17:46 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:03:13.463 07:17:46 -- pm/common@17 -- $ local monitor 00:03:13.463 07:17:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.463 07:17:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.463 07:17:46 -- pm/common@25 -- $ sleep 1 00:03:13.463 07:17:46 -- pm/common@21 -- $ date +%s 00:03:13.463 07:17:46 -- pm/common@21 -- $ date +%s 00:03:13.463 07:17:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721891866 00:03:13.463 07:17:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721891866 00:03:13.463 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721891866_collect-vmstat.pm.log 00:03:13.463 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721891866_collect-cpu-load.pm.log 00:03:14.398 07:17:47 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:03:14.398 07:17:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:14.398 07:17:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:14.398 07:17:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.398 07:17:47 -- spdk/autobuild.sh@16 -- $ date -u 00:03:14.398 Thu Jul 25 07:17:47 AM UTC 2024 00:03:14.398 07:17:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:14.398 v24.09-pre-310-gc0d54772e 00:03:14.398 07:17:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:14.398 07:17:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:14.398 07:17:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:14.398 07:17:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:14.398 07:17:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:14.398 07:17:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.398 ************************************ 00:03:14.398 START TEST ubsan 00:03:14.398 ************************************ 00:03:14.398 using ubsan 00:03:14.398 07:17:47 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:03:14.398 00:03:14.398 
real 0m0.000s 00:03:14.398 user 0m0.000s 00:03:14.398 sys 0m0.000s 00:03:14.398 07:17:47 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:14.398 07:17:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:14.398 ************************************ 00:03:14.398 END TEST ubsan 00:03:14.398 ************************************ 00:03:14.656 07:17:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:14.656 07:17:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.656 07:17:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.656 07:17:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.657 07:17:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.657 07:17:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.657 07:17:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.657 07:17:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.657 07:17:47 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:03:14.657 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.657 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:15.225 Using 'verbs' RDMA provider 00:03:31.105 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:45.993 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:45.993 go version go1.21.1 linux/amd64 00:03:45.993 Creating mk/config.mk...done. 00:03:45.993 Creating mk/cc.flags.mk...done. 00:03:45.993 Type 'make' to build. 00:03:45.993 07:18:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:45.993 07:18:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:45.993 07:18:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:45.993 07:18:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.993 ************************************ 00:03:45.993 START TEST make 00:03:45.993 ************************************ 00:03:45.993 07:18:18 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:45.993 make[1]: Nothing to be done for 'all'. 
00:03:58.211 The Meson build system 00:03:58.211 Version: 1.3.1 00:03:58.211 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:58.211 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:58.211 Build type: native build 00:03:58.211 Program cat found: YES (/usr/bin/cat) 00:03:58.211 Project name: DPDK 00:03:58.211 Project version: 24.03.0 00:03:58.211 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:58.211 C linker for the host machine: cc ld.bfd 2.39-16 00:03:58.211 Host machine cpu family: x86_64 00:03:58.211 Host machine cpu: x86_64 00:03:58.211 Message: ## Building in Developer Mode ## 00:03:58.211 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:58.211 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:58.211 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:58.211 Program python3 found: YES (/usr/bin/python3) 00:03:58.211 Program cat found: YES (/usr/bin/cat) 00:03:58.211 Compiler for C supports arguments -march=native: YES 00:03:58.211 Checking for size of "void *" : 8 00:03:58.211 Checking for size of "void *" : 8 (cached) 00:03:58.211 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:58.211 Library m found: YES 00:03:58.211 Library numa found: YES 00:03:58.212 Has header "numaif.h" : YES 00:03:58.212 Library fdt found: NO 00:03:58.212 Library execinfo found: NO 00:03:58.212 Has header "execinfo.h" : YES 00:03:58.212 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:58.212 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:58.212 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:58.212 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:58.212 Run-time dependency openssl found: YES 3.0.9 00:03:58.212 Run-time dependency libpcap found: YES 1.10.4 00:03:58.212 Has header "pcap.h" with dependency libpcap: YES 00:03:58.212 Compiler for C supports arguments -Wcast-qual: YES 00:03:58.212 Compiler for C supports arguments -Wdeprecated: YES 00:03:58.212 Compiler for C supports arguments -Wformat: YES 00:03:58.212 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:58.212 Compiler for C supports arguments -Wformat-security: NO 00:03:58.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:58.212 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:58.212 Compiler for C supports arguments -Wnested-externs: YES 00:03:58.212 Compiler for C supports arguments -Wold-style-definition: YES 00:03:58.212 Compiler for C supports arguments -Wpointer-arith: YES 00:03:58.212 Compiler for C supports arguments -Wsign-compare: YES 00:03:58.212 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:58.212 Compiler for C supports arguments -Wundef: YES 00:03:58.212 Compiler for C supports arguments -Wwrite-strings: YES 00:03:58.212 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:58.212 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:58.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:58.212 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:58.212 Program objdump found: YES (/usr/bin/objdump) 00:03:58.212 Compiler for C supports arguments -mavx512f: YES 00:03:58.212 Checking if "AVX512 checking" compiles: YES 00:03:58.212 Fetching value of define "__SSE4_2__" : 1 00:03:58.212 Fetching value of define 
"__AES__" : 1 00:03:58.212 Fetching value of define "__AVX__" : 1 00:03:58.212 Fetching value of define "__AVX2__" : 1 00:03:58.212 Fetching value of define "__AVX512BW__" : 1 00:03:58.212 Fetching value of define "__AVX512CD__" : 1 00:03:58.212 Fetching value of define "__AVX512DQ__" : 1 00:03:58.212 Fetching value of define "__AVX512F__" : 1 00:03:58.212 Fetching value of define "__AVX512VL__" : 1 00:03:58.212 Fetching value of define "__PCLMUL__" : 1 00:03:58.212 Fetching value of define "__RDRND__" : 1 00:03:58.212 Fetching value of define "__RDSEED__" : 1 00:03:58.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:58.212 Fetching value of define "__znver1__" : (undefined) 00:03:58.212 Fetching value of define "__znver2__" : (undefined) 00:03:58.212 Fetching value of define "__znver3__" : (undefined) 00:03:58.212 Fetching value of define "__znver4__" : (undefined) 00:03:58.212 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:58.212 Message: lib/log: Defining dependency "log" 00:03:58.212 Message: lib/kvargs: Defining dependency "kvargs" 00:03:58.212 Message: lib/telemetry: Defining dependency "telemetry" 00:03:58.212 Checking for function "getentropy" : NO 00:03:58.212 Message: lib/eal: Defining dependency "eal" 00:03:58.212 Message: lib/ring: Defining dependency "ring" 00:03:58.212 Message: lib/rcu: Defining dependency "rcu" 00:03:58.212 Message: lib/mempool: Defining dependency "mempool" 00:03:58.212 Message: lib/mbuf: Defining dependency "mbuf" 00:03:58.212 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:58.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:58.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:58.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:58.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:58.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:58.212 Compiler for C supports arguments -mpclmul: YES 00:03:58.212 Compiler for C supports arguments -maes: YES 00:03:58.212 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:58.212 Compiler for C supports arguments -mavx512bw: YES 00:03:58.212 Compiler for C supports arguments -mavx512dq: YES 00:03:58.212 Compiler for C supports arguments -mavx512vl: YES 00:03:58.212 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:58.212 Compiler for C supports arguments -mavx2: YES 00:03:58.212 Compiler for C supports arguments -mavx: YES 00:03:58.212 Message: lib/net: Defining dependency "net" 00:03:58.212 Message: lib/meter: Defining dependency "meter" 00:03:58.212 Message: lib/ethdev: Defining dependency "ethdev" 00:03:58.212 Message: lib/pci: Defining dependency "pci" 00:03:58.212 Message: lib/cmdline: Defining dependency "cmdline" 00:03:58.212 Message: lib/hash: Defining dependency "hash" 00:03:58.212 Message: lib/timer: Defining dependency "timer" 00:03:58.212 Message: lib/compressdev: Defining dependency "compressdev" 00:03:58.212 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:58.212 Message: lib/dmadev: Defining dependency "dmadev" 00:03:58.212 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:58.212 Message: lib/power: Defining dependency "power" 00:03:58.212 Message: lib/reorder: Defining dependency "reorder" 00:03:58.212 Message: lib/security: Defining dependency "security" 00:03:58.212 Has header "linux/userfaultfd.h" : YES 00:03:58.212 Has header "linux/vduse.h" : YES 00:03:58.212 Message: lib/vhost: Defining dependency "vhost" 00:03:58.212 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:03:58.212 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:58.212 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:58.212 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:58.212 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:58.212 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:58.212 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:58.212 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:58.212 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:58.212 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:58.212 Program doxygen found: YES (/usr/bin/doxygen) 00:03:58.212 Configuring doxy-api-html.conf using configuration 00:03:58.212 Configuring doxy-api-man.conf using configuration 00:03:58.212 Program mandb found: YES (/usr/bin/mandb) 00:03:58.212 Program sphinx-build found: NO 00:03:58.212 Configuring rte_build_config.h using configuration 00:03:58.212 Message: 00:03:58.212 ================= 00:03:58.212 Applications Enabled 00:03:58.212 ================= 00:03:58.212 00:03:58.212 apps: 00:03:58.212 00:03:58.212 00:03:58.212 Message: 00:03:58.212 ================= 00:03:58.212 Libraries Enabled 00:03:58.212 ================= 00:03:58.212 00:03:58.212 libs: 00:03:58.212 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:58.212 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:58.212 cryptodev, dmadev, power, reorder, security, vhost, 00:03:58.212 00:03:58.212 Message: 00:03:58.212 =============== 00:03:58.212 Drivers Enabled 00:03:58.212 =============== 00:03:58.212 00:03:58.212 common: 00:03:58.212 00:03:58.212 bus: 00:03:58.212 pci, vdev, 00:03:58.212 mempool: 00:03:58.212 ring, 00:03:58.212 dma: 00:03:58.212 00:03:58.212 net: 00:03:58.212 00:03:58.212 crypto: 00:03:58.212 00:03:58.212 compress: 00:03:58.212 00:03:58.212 vdpa: 00:03:58.212 00:03:58.212 00:03:58.212 Message: 00:03:58.212 ================= 00:03:58.212 Content Skipped 00:03:58.212 ================= 00:03:58.212 00:03:58.212 apps: 00:03:58.212 dumpcap: explicitly disabled via build config 00:03:58.212 graph: explicitly disabled via build config 00:03:58.212 pdump: explicitly disabled via build config 00:03:58.212 proc-info: explicitly disabled via build config 00:03:58.212 test-acl: explicitly disabled via build config 00:03:58.212 test-bbdev: explicitly disabled via build config 00:03:58.212 test-cmdline: explicitly disabled via build config 00:03:58.212 test-compress-perf: explicitly disabled via build config 00:03:58.212 test-crypto-perf: explicitly disabled via build config 00:03:58.212 test-dma-perf: explicitly disabled via build config 00:03:58.212 test-eventdev: explicitly disabled via build config 00:03:58.212 test-fib: explicitly disabled via build config 00:03:58.212 test-flow-perf: explicitly disabled via build config 00:03:58.212 test-gpudev: explicitly disabled via build config 00:03:58.212 test-mldev: explicitly disabled via build config 00:03:58.212 test-pipeline: explicitly disabled via build config 00:03:58.212 test-pmd: explicitly disabled via build config 00:03:58.212 test-regex: explicitly disabled via build config 00:03:58.212 test-sad: explicitly disabled via build config 00:03:58.212 test-security-perf: explicitly disabled via build config 00:03:58.212 00:03:58.212 libs: 00:03:58.212 argparse: 
explicitly disabled via build config 00:03:58.212 metrics: explicitly disabled via build config 00:03:58.212 acl: explicitly disabled via build config 00:03:58.212 bbdev: explicitly disabled via build config 00:03:58.212 bitratestats: explicitly disabled via build config 00:03:58.212 bpf: explicitly disabled via build config 00:03:58.212 cfgfile: explicitly disabled via build config 00:03:58.212 distributor: explicitly disabled via build config 00:03:58.212 efd: explicitly disabled via build config 00:03:58.212 eventdev: explicitly disabled via build config 00:03:58.212 dispatcher: explicitly disabled via build config 00:03:58.212 gpudev: explicitly disabled via build config 00:03:58.212 gro: explicitly disabled via build config 00:03:58.212 gso: explicitly disabled via build config 00:03:58.212 ip_frag: explicitly disabled via build config 00:03:58.212 jobstats: explicitly disabled via build config 00:03:58.212 latencystats: explicitly disabled via build config 00:03:58.212 lpm: explicitly disabled via build config 00:03:58.212 member: explicitly disabled via build config 00:03:58.212 pcapng: explicitly disabled via build config 00:03:58.213 rawdev: explicitly disabled via build config 00:03:58.213 regexdev: explicitly disabled via build config 00:03:58.213 mldev: explicitly disabled via build config 00:03:58.213 rib: explicitly disabled via build config 00:03:58.213 sched: explicitly disabled via build config 00:03:58.213 stack: explicitly disabled via build config 00:03:58.213 ipsec: explicitly disabled via build config 00:03:58.213 pdcp: explicitly disabled via build config 00:03:58.213 fib: explicitly disabled via build config 00:03:58.213 port: explicitly disabled via build config 00:03:58.213 pdump: explicitly disabled via build config 00:03:58.213 table: explicitly disabled via build config 00:03:58.213 pipeline: explicitly disabled via build config 00:03:58.213 graph: explicitly disabled via build config 00:03:58.213 node: explicitly disabled via build config 00:03:58.213 00:03:58.213 drivers: 00:03:58.213 common/cpt: not in enabled drivers build config 00:03:58.213 common/dpaax: not in enabled drivers build config 00:03:58.213 common/iavf: not in enabled drivers build config 00:03:58.213 common/idpf: not in enabled drivers build config 00:03:58.213 common/ionic: not in enabled drivers build config 00:03:58.213 common/mvep: not in enabled drivers build config 00:03:58.213 common/octeontx: not in enabled drivers build config 00:03:58.213 bus/auxiliary: not in enabled drivers build config 00:03:58.213 bus/cdx: not in enabled drivers build config 00:03:58.213 bus/dpaa: not in enabled drivers build config 00:03:58.213 bus/fslmc: not in enabled drivers build config 00:03:58.213 bus/ifpga: not in enabled drivers build config 00:03:58.213 bus/platform: not in enabled drivers build config 00:03:58.213 bus/uacce: not in enabled drivers build config 00:03:58.213 bus/vmbus: not in enabled drivers build config 00:03:58.213 common/cnxk: not in enabled drivers build config 00:03:58.213 common/mlx5: not in enabled drivers build config 00:03:58.213 common/nfp: not in enabled drivers build config 00:03:58.213 common/nitrox: not in enabled drivers build config 00:03:58.213 common/qat: not in enabled drivers build config 00:03:58.213 common/sfc_efx: not in enabled drivers build config 00:03:58.213 mempool/bucket: not in enabled drivers build config 00:03:58.213 mempool/cnxk: not in enabled drivers build config 00:03:58.213 mempool/dpaa: not in enabled drivers build config 00:03:58.213 mempool/dpaa2: 
not in enabled drivers build config 00:03:58.213 mempool/octeontx: not in enabled drivers build config 00:03:58.213 mempool/stack: not in enabled drivers build config 00:03:58.213 dma/cnxk: not in enabled drivers build config 00:03:58.213 dma/dpaa: not in enabled drivers build config 00:03:58.213 dma/dpaa2: not in enabled drivers build config 00:03:58.213 dma/hisilicon: not in enabled drivers build config 00:03:58.213 dma/idxd: not in enabled drivers build config 00:03:58.213 dma/ioat: not in enabled drivers build config 00:03:58.213 dma/skeleton: not in enabled drivers build config 00:03:58.213 net/af_packet: not in enabled drivers build config 00:03:58.213 net/af_xdp: not in enabled drivers build config 00:03:58.213 net/ark: not in enabled drivers build config 00:03:58.213 net/atlantic: not in enabled drivers build config 00:03:58.213 net/avp: not in enabled drivers build config 00:03:58.213 net/axgbe: not in enabled drivers build config 00:03:58.213 net/bnx2x: not in enabled drivers build config 00:03:58.213 net/bnxt: not in enabled drivers build config 00:03:58.213 net/bonding: not in enabled drivers build config 00:03:58.213 net/cnxk: not in enabled drivers build config 00:03:58.213 net/cpfl: not in enabled drivers build config 00:03:58.213 net/cxgbe: not in enabled drivers build config 00:03:58.213 net/dpaa: not in enabled drivers build config 00:03:58.213 net/dpaa2: not in enabled drivers build config 00:03:58.213 net/e1000: not in enabled drivers build config 00:03:58.213 net/ena: not in enabled drivers build config 00:03:58.213 net/enetc: not in enabled drivers build config 00:03:58.213 net/enetfec: not in enabled drivers build config 00:03:58.213 net/enic: not in enabled drivers build config 00:03:58.213 net/failsafe: not in enabled drivers build config 00:03:58.213 net/fm10k: not in enabled drivers build config 00:03:58.213 net/gve: not in enabled drivers build config 00:03:58.213 net/hinic: not in enabled drivers build config 00:03:58.213 net/hns3: not in enabled drivers build config 00:03:58.213 net/i40e: not in enabled drivers build config 00:03:58.213 net/iavf: not in enabled drivers build config 00:03:58.213 net/ice: not in enabled drivers build config 00:03:58.213 net/idpf: not in enabled drivers build config 00:03:58.213 net/igc: not in enabled drivers build config 00:03:58.213 net/ionic: not in enabled drivers build config 00:03:58.213 net/ipn3ke: not in enabled drivers build config 00:03:58.213 net/ixgbe: not in enabled drivers build config 00:03:58.213 net/mana: not in enabled drivers build config 00:03:58.213 net/memif: not in enabled drivers build config 00:03:58.213 net/mlx4: not in enabled drivers build config 00:03:58.213 net/mlx5: not in enabled drivers build config 00:03:58.213 net/mvneta: not in enabled drivers build config 00:03:58.213 net/mvpp2: not in enabled drivers build config 00:03:58.213 net/netvsc: not in enabled drivers build config 00:03:58.213 net/nfb: not in enabled drivers build config 00:03:58.213 net/nfp: not in enabled drivers build config 00:03:58.213 net/ngbe: not in enabled drivers build config 00:03:58.213 net/null: not in enabled drivers build config 00:03:58.213 net/octeontx: not in enabled drivers build config 00:03:58.213 net/octeon_ep: not in enabled drivers build config 00:03:58.213 net/pcap: not in enabled drivers build config 00:03:58.213 net/pfe: not in enabled drivers build config 00:03:58.213 net/qede: not in enabled drivers build config 00:03:58.213 net/ring: not in enabled drivers build config 00:03:58.213 net/sfc: not in 
enabled drivers build config 00:03:58.213 net/softnic: not in enabled drivers build config 00:03:58.213 net/tap: not in enabled drivers build config 00:03:58.213 net/thunderx: not in enabled drivers build config 00:03:58.213 net/txgbe: not in enabled drivers build config 00:03:58.213 net/vdev_netvsc: not in enabled drivers build config 00:03:58.213 net/vhost: not in enabled drivers build config 00:03:58.213 net/virtio: not in enabled drivers build config 00:03:58.213 net/vmxnet3: not in enabled drivers build config 00:03:58.213 raw/*: missing internal dependency, "rawdev" 00:03:58.213 crypto/armv8: not in enabled drivers build config 00:03:58.213 crypto/bcmfs: not in enabled drivers build config 00:03:58.213 crypto/caam_jr: not in enabled drivers build config 00:03:58.213 crypto/ccp: not in enabled drivers build config 00:03:58.213 crypto/cnxk: not in enabled drivers build config 00:03:58.213 crypto/dpaa_sec: not in enabled drivers build config 00:03:58.213 crypto/dpaa2_sec: not in enabled drivers build config 00:03:58.213 crypto/ipsec_mb: not in enabled drivers build config 00:03:58.213 crypto/mlx5: not in enabled drivers build config 00:03:58.213 crypto/mvsam: not in enabled drivers build config 00:03:58.213 crypto/nitrox: not in enabled drivers build config 00:03:58.213 crypto/null: not in enabled drivers build config 00:03:58.213 crypto/octeontx: not in enabled drivers build config 00:03:58.213 crypto/openssl: not in enabled drivers build config 00:03:58.213 crypto/scheduler: not in enabled drivers build config 00:03:58.213 crypto/uadk: not in enabled drivers build config 00:03:58.213 crypto/virtio: not in enabled drivers build config 00:03:58.213 compress/isal: not in enabled drivers build config 00:03:58.213 compress/mlx5: not in enabled drivers build config 00:03:58.213 compress/nitrox: not in enabled drivers build config 00:03:58.213 compress/octeontx: not in enabled drivers build config 00:03:58.213 compress/zlib: not in enabled drivers build config 00:03:58.213 regex/*: missing internal dependency, "regexdev" 00:03:58.213 ml/*: missing internal dependency, "mldev" 00:03:58.213 vdpa/ifc: not in enabled drivers build config 00:03:58.213 vdpa/mlx5: not in enabled drivers build config 00:03:58.213 vdpa/nfp: not in enabled drivers build config 00:03:58.213 vdpa/sfc: not in enabled drivers build config 00:03:58.213 event/*: missing internal dependency, "eventdev" 00:03:58.213 baseband/*: missing internal dependency, "bbdev" 00:03:58.213 gpu/*: missing internal dependency, "gpudev" 00:03:58.213 00:03:58.213 00:03:58.213 Build targets in project: 85 00:03:58.213 00:03:58.213 DPDK 24.03.0 00:03:58.213 00:03:58.213 User defined options 00:03:58.213 buildtype : debug 00:03:58.213 default_library : shared 00:03:58.213 libdir : lib 00:03:58.213 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:58.213 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:58.213 c_link_args : 00:03:58.213 cpu_instruction_set: native 00:03:58.213 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:58.213 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:58.213 enable_docs : false 00:03:58.213 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:58.213 enable_kmods : false 00:03:58.213 max_lcores : 128 00:03:58.213 tests : false 00:03:58.213 00:03:58.213 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:58.472 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:58.472 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:58.472 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:58.472 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:58.472 [4/268] Linking static target lib/librte_log.a 00:03:58.472 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:58.472 [6/268] Linking static target lib/librte_kvargs.a 00:03:59.037 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.037 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:59.037 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:59.037 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:59.037 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:59.037 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:59.037 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:59.037 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:59.037 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:59.037 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:59.037 [17/268] Linking static target lib/librte_telemetry.a 00:03:59.037 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:59.295 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.554 [20/268] Linking target lib/librte_log.so.24.1 00:03:59.554 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:59.554 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:59.554 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:59.813 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:59.813 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:59.813 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:59.813 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:59.813 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:59.814 [29/268] Linking target lib/librte_kvargs.so.24.1 00:03:59.814 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:59.814 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:59.814 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:00.072 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:00.072 [34/268] 
Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.072 [35/268] Linking target lib/librte_telemetry.so.24.1 00:04:00.333 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:00.333 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:00.333 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:00.333 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:00.333 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:00.333 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:00.333 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:00.333 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:00.333 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:00.333 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:00.595 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:00.595 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:00.595 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:00.595 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:00.853 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:00.853 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:01.112 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:01.112 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:01.112 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:01.112 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:01.112 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:01.112 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:01.112 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:01.371 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:01.371 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:01.371 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:01.371 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:01.630 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:01.630 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:01.630 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:01.630 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:01.889 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:01.889 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:01.889 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:01.889 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:02.148 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:02.148 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:02.149 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
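
The "User defined options" summary printed above is meson's record of how this DPDK sub-build was configured (debug build, shared default library, most apps and libs disabled, only the bus/pci, bus/vdev and mempool/ring drivers enabled). The exact command line driven by SPDK's dpdk build wrapper is not shown in this log, so the following is only an illustrative reconstruction of a configuration step that would produce that summary; the two "..." placeholders stand for the full disable_apps/disable_libs lists already printed above:

  meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='dumpcap,graph,pdump,...' \
      -Ddisable_libs='acl,argparse,bbdev,...' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
  ninja -C build-tmp

The numbered [N/268] lines are ninja working through the 85 resulting build targets inside build-tmp.
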
00:04:02.149 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:02.149 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:02.408 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:02.408 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:02.408 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:02.408 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:02.408 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:02.408 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:02.679 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:02.679 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:02.679 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:02.679 [85/268] Linking static target lib/librte_eal.a 00:04:02.944 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:02.944 [87/268] Linking static target lib/librte_ring.a 00:04:02.944 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:03.203 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:03.203 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:03.203 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:03.203 [92/268] Linking static target lib/librte_mempool.a 00:04:03.203 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:03.203 [94/268] Linking static target lib/librte_rcu.a 00:04:03.203 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:03.462 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.462 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:03.462 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:03.462 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:03.462 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:03.462 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:03.462 [102/268] Linking static target lib/librte_mbuf.a 00:04:03.721 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:03.721 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.721 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:03.721 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:03.721 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:03.721 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:03.721 [109/268] Linking static target lib/librte_net.a 00:04:03.980 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:03.980 [111/268] Linking static target lib/librte_meter.a 00:04:03.980 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:04.248 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:04.248 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:04.248 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.248 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:04.248 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.248 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.518 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.518 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:04.518 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:04.518 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:04.777 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:04.777 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:05.037 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:05.037 [126/268] Linking static target lib/librte_pci.a 00:04:05.037 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:05.037 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:05.037 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:05.037 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:05.296 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:05.296 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:05.296 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:05.296 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:05.296 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:05.296 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:05.296 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.296 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:05.296 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:05.296 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:05.296 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:05.296 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:05.296 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:05.296 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:05.555 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:05.555 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:05.814 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:05.814 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:05.814 [149/268] Linking static target lib/librte_timer.a 00:04:05.814 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:05.814 [151/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:05.814 [152/268] Linking static target lib/librte_cmdline.a 00:04:06.073 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:06.073 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:06.332 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:06.332 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:06.332 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:06.332 [158/268] Linking static target lib/librte_ethdev.a 00:04:06.332 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.332 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:06.332 [161/268] Linking static target lib/librte_compressdev.a 00:04:06.332 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:06.591 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:06.591 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:06.591 [165/268] Linking static target lib/librte_hash.a 00:04:06.591 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:06.851 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:06.851 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:06.851 [169/268] Linking static target lib/librte_dmadev.a 00:04:06.851 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:06.851 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:06.851 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:07.110 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:07.110 [174/268] Linking static target lib/librte_cryptodev.a 00:04:07.110 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:07.369 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:07.369 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:07.369 [178/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.369 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.629 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:07.629 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.629 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:07.629 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:07.887 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.887 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:07.887 [186/268] Linking static target lib/librte_power.a 00:04:07.887 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:08.174 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:08.174 [189/268] Linking static target lib/librte_security.a 00:04:08.174 [190/268] Linking static target lib/librte_reorder.a 00:04:08.174 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:08.174 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:08.446 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:08.446 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.446 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:08.705 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.964 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:08.964 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.964 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:08.964 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:08.964 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:09.222 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:09.481 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:09.481 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:09.481 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:09.481 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:09.481 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.481 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:09.481 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:09.740 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:09.740 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:09.740 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:09.740 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:09.740 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:09.740 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:09.740 [216/268] Linking static target drivers/librte_bus_pci.a 00:04:10.003 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:10.003 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:10.003 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:10.003 [220/268] Linking static target drivers/librte_bus_vdev.a 00:04:10.003 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:10.003 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:10.003 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:10.263 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:10.263 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:10.263 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.263 [227/268] Linking static target drivers/librte_mempool_ring.a 00:04:10.522 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.091 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:11.091 [230/268] Linking static target lib/librte_vhost.a 00:04:13.654 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.654 [232/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:04:13.913 [233/268] Linking target lib/librte_eal.so.24.1 00:04:13.913 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:13.913 [235/268] Linking target lib/librte_pci.so.24.1 00:04:13.913 [236/268] Linking target lib/librte_ring.so.24.1 00:04:13.913 [237/268] Linking target lib/librte_timer.so.24.1 00:04:13.913 [238/268] Linking target lib/librte_meter.so.24.1 00:04:13.913 [239/268] Linking target lib/librte_dmadev.so.24.1 00:04:13.913 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:14.172 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:14.172 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:14.172 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:14.172 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:14.172 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:14.172 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:14.172 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:14.172 [248/268] Linking target lib/librte_rcu.so.24.1 00:04:14.432 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:14.432 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:14.432 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:14.432 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:14.432 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:14.691 [254/268] Linking target lib/librte_compressdev.so.24.1 00:04:14.691 [255/268] Linking target lib/librte_net.so.24.1 00:04:14.691 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:14.691 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:04:14.691 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:14.691 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:14.691 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:14.691 [261/268] Linking target lib/librte_hash.so.24.1 00:04:14.691 [262/268] Linking target lib/librte_security.so.24.1 00:04:14.949 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:15.885 [264/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.885 [265/268] Linking target lib/librte_ethdev.so.24.1 00:04:15.885 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:16.143 [267/268] Linking target lib/librte_power.so.24.1 00:04:16.143 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:16.143 INFO: autodetecting backend as ninja 00:04:16.143 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:17.521 CC lib/log/log_flags.o 00:04:17.521 CC lib/log/log.o 00:04:17.521 CC lib/log/log_deprecated.o 00:04:17.521 CC lib/ut/ut.o 00:04:17.521 CC lib/ut_mock/mock.o 00:04:17.521 LIB libspdk_log.a 00:04:17.521 LIB libspdk_ut.a 00:04:17.521 SO libspdk_log.so.7.0 00:04:17.521 SO libspdk_ut.so.2.0 00:04:17.521 LIB libspdk_ut_mock.a 00:04:17.521 SYMLINK libspdk_ut.so 00:04:17.521 SO libspdk_ut_mock.so.6.0 00:04:17.521 SYMLINK libspdk_log.so 00:04:17.521 SYMLINK 
libspdk_ut_mock.so 00:04:17.781 CC lib/dma/dma.o 00:04:17.781 CXX lib/trace_parser/trace.o 00:04:17.781 CC lib/util/base64.o 00:04:17.781 CC lib/util/bit_array.o 00:04:17.781 CC lib/util/cpuset.o 00:04:17.781 CC lib/util/crc16.o 00:04:17.781 CC lib/util/crc32.o 00:04:17.781 CC lib/util/crc32c.o 00:04:17.781 CC lib/ioat/ioat.o 00:04:18.040 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.040 CC lib/util/crc32_ieee.o 00:04:18.040 CC lib/util/crc64.o 00:04:18.040 CC lib/util/dif.o 00:04:18.040 CC lib/util/fd.o 00:04:18.040 LIB libspdk_dma.a 00:04:18.040 CC lib/util/fd_group.o 00:04:18.040 SO libspdk_dma.so.4.0 00:04:18.040 CC lib/util/file.o 00:04:18.040 CC lib/vfio_user/host/vfio_user.o 00:04:18.040 SYMLINK libspdk_dma.so 00:04:18.040 CC lib/util/hexlify.o 00:04:18.040 LIB libspdk_ioat.a 00:04:18.040 CC lib/util/iov.o 00:04:18.040 SO libspdk_ioat.so.7.0 00:04:18.298 CC lib/util/math.o 00:04:18.298 SYMLINK libspdk_ioat.so 00:04:18.298 CC lib/util/net.o 00:04:18.298 CC lib/util/pipe.o 00:04:18.298 CC lib/util/strerror_tls.o 00:04:18.298 CC lib/util/string.o 00:04:18.298 CC lib/util/uuid.o 00:04:18.298 LIB libspdk_vfio_user.a 00:04:18.298 CC lib/util/xor.o 00:04:18.298 SO libspdk_vfio_user.so.5.0 00:04:18.298 CC lib/util/zipf.o 00:04:18.298 SYMLINK libspdk_vfio_user.so 00:04:18.558 LIB libspdk_util.a 00:04:18.558 SO libspdk_util.so.10.0 00:04:18.817 SYMLINK libspdk_util.so 00:04:18.817 LIB libspdk_trace_parser.a 00:04:18.817 SO libspdk_trace_parser.so.5.0 00:04:18.817 SYMLINK libspdk_trace_parser.so 00:04:19.076 CC lib/conf/conf.o 00:04:19.076 CC lib/rdma_provider/common.o 00:04:19.076 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.076 CC lib/rdma_utils/rdma_utils.o 00:04:19.076 CC lib/env_dpdk/memory.o 00:04:19.076 CC lib/env_dpdk/env.o 00:04:19.076 CC lib/env_dpdk/pci.o 00:04:19.076 CC lib/idxd/idxd.o 00:04:19.076 CC lib/json/json_parse.o 00:04:19.076 CC lib/vmd/vmd.o 00:04:19.076 CC lib/json/json_util.o 00:04:19.076 LIB libspdk_rdma_provider.a 00:04:19.076 LIB libspdk_conf.a 00:04:19.076 SO libspdk_rdma_provider.so.6.0 00:04:19.336 SO libspdk_conf.so.6.0 00:04:19.336 SYMLINK libspdk_rdma_provider.so 00:04:19.336 CC lib/json/json_write.o 00:04:19.336 LIB libspdk_rdma_utils.a 00:04:19.336 SYMLINK libspdk_conf.so 00:04:19.336 CC lib/idxd/idxd_user.o 00:04:19.336 CC lib/env_dpdk/init.o 00:04:19.336 SO libspdk_rdma_utils.so.1.0 00:04:19.336 CC lib/vmd/led.o 00:04:19.336 SYMLINK libspdk_rdma_utils.so 00:04:19.336 CC lib/idxd/idxd_kernel.o 00:04:19.336 CC lib/env_dpdk/threads.o 00:04:19.336 CC lib/env_dpdk/pci_ioat.o 00:04:19.595 CC lib/env_dpdk/pci_virtio.o 00:04:19.595 CC lib/env_dpdk/pci_vmd.o 00:04:19.595 CC lib/env_dpdk/pci_idxd.o 00:04:19.595 LIB libspdk_idxd.a 00:04:19.595 LIB libspdk_json.a 00:04:19.595 SO libspdk_idxd.so.12.0 00:04:19.595 SO libspdk_json.so.6.0 00:04:19.595 CC lib/env_dpdk/pci_event.o 00:04:19.595 LIB libspdk_vmd.a 00:04:19.595 CC lib/env_dpdk/sigbus_handler.o 00:04:19.595 CC lib/env_dpdk/pci_dpdk.o 00:04:19.595 SYMLINK libspdk_idxd.so 00:04:19.595 SO libspdk_vmd.so.6.0 00:04:19.595 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.595 SYMLINK libspdk_json.so 00:04:19.595 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.595 SYMLINK libspdk_vmd.so 00:04:19.854 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.855 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.855 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.855 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.113 LIB libspdk_jsonrpc.a 00:04:20.372 SO libspdk_jsonrpc.so.6.0 00:04:20.372 LIB libspdk_env_dpdk.a 00:04:20.372 SYMLINK libspdk_jsonrpc.so 
00:04:20.372 SO libspdk_env_dpdk.so.15.0 00:04:20.631 SYMLINK libspdk_env_dpdk.so 00:04:20.631 CC lib/rpc/rpc.o 00:04:20.889 LIB libspdk_rpc.a 00:04:20.889 SO libspdk_rpc.so.6.0 00:04:20.889 SYMLINK libspdk_rpc.so 00:04:21.458 CC lib/trace/trace_flags.o 00:04:21.458 CC lib/trace/trace.o 00:04:21.458 CC lib/trace/trace_rpc.o 00:04:21.458 CC lib/notify/notify_rpc.o 00:04:21.458 CC lib/notify/notify.o 00:04:21.458 CC lib/keyring/keyring.o 00:04:21.458 CC lib/keyring/keyring_rpc.o 00:04:21.458 LIB libspdk_notify.a 00:04:21.458 LIB libspdk_trace.a 00:04:21.458 SO libspdk_notify.so.6.0 00:04:21.716 LIB libspdk_keyring.a 00:04:21.716 SO libspdk_trace.so.10.0 00:04:21.716 SYMLINK libspdk_notify.so 00:04:21.716 SO libspdk_keyring.so.1.0 00:04:21.716 SYMLINK libspdk_trace.so 00:04:21.716 SYMLINK libspdk_keyring.so 00:04:21.974 CC lib/thread/thread.o 00:04:21.974 CC lib/thread/iobuf.o 00:04:21.974 CC lib/sock/sock_rpc.o 00:04:21.974 CC lib/sock/sock.o 00:04:22.543 LIB libspdk_sock.a 00:04:22.543 SO libspdk_sock.so.10.0 00:04:22.543 SYMLINK libspdk_sock.so 00:04:23.112 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.112 CC lib/nvme/nvme_ctrlr.o 00:04:23.112 CC lib/nvme/nvme_fabric.o 00:04:23.112 CC lib/nvme/nvme_ns.o 00:04:23.112 CC lib/nvme/nvme_ns_cmd.o 00:04:23.112 CC lib/nvme/nvme_pcie.o 00:04:23.112 CC lib/nvme/nvme_qpair.o 00:04:23.112 CC lib/nvme/nvme_pcie_common.o 00:04:23.112 CC lib/nvme/nvme.o 00:04:23.371 LIB libspdk_thread.a 00:04:23.371 SO libspdk_thread.so.10.1 00:04:23.630 SYMLINK libspdk_thread.so 00:04:23.630 CC lib/nvme/nvme_quirks.o 00:04:23.630 CC lib/nvme/nvme_transport.o 00:04:23.630 CC lib/nvme/nvme_discovery.o 00:04:23.889 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.889 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.889 CC lib/nvme/nvme_tcp.o 00:04:23.889 CC lib/nvme/nvme_opal.o 00:04:23.889 CC lib/nvme/nvme_io_msg.o 00:04:24.148 CC lib/nvme/nvme_poll_group.o 00:04:24.148 CC lib/nvme/nvme_zns.o 00:04:24.406 CC lib/nvme/nvme_stubs.o 00:04:24.406 CC lib/nvme/nvme_auth.o 00:04:24.406 CC lib/nvme/nvme_cuse.o 00:04:24.406 CC lib/nvme/nvme_rdma.o 00:04:24.664 CC lib/accel/accel.o 00:04:24.664 CC lib/blob/blobstore.o 00:04:24.664 CC lib/blob/request.o 00:04:24.664 CC lib/blob/zeroes.o 00:04:24.923 CC lib/accel/accel_rpc.o 00:04:24.923 CC lib/blob/blob_bs_dev.o 00:04:24.923 CC lib/accel/accel_sw.o 00:04:25.182 CC lib/init/json_config.o 00:04:25.182 CC lib/init/subsystem.o 00:04:25.182 CC lib/init/subsystem_rpc.o 00:04:25.182 CC lib/init/rpc.o 00:04:25.182 CC lib/virtio/virtio.o 00:04:25.182 CC lib/virtio/virtio_vhost_user.o 00:04:25.442 CC lib/virtio/virtio_vfio_user.o 00:04:25.442 CC lib/virtio/virtio_pci.o 00:04:25.442 LIB libspdk_init.a 00:04:25.442 LIB libspdk_accel.a 00:04:25.442 SO libspdk_init.so.5.0 00:04:25.701 SO libspdk_accel.so.16.0 00:04:25.701 SYMLINK libspdk_init.so 00:04:25.701 SYMLINK libspdk_accel.so 00:04:25.701 LIB libspdk_virtio.a 00:04:25.701 LIB libspdk_nvme.a 00:04:25.701 SO libspdk_virtio.so.7.0 00:04:25.701 SYMLINK libspdk_virtio.so 00:04:25.960 SO libspdk_nvme.so.13.1 00:04:25.960 CC lib/bdev/bdev.o 00:04:25.960 CC lib/bdev/bdev_rpc.o 00:04:25.960 CC lib/bdev/bdev_zone.o 00:04:25.960 CC lib/bdev/part.o 00:04:25.960 CC lib/bdev/scsi_nvme.o 00:04:25.960 CC lib/event/reactor.o 00:04:25.960 CC lib/event/app.o 00:04:25.960 CC lib/event/log_rpc.o 00:04:26.220 CC lib/event/app_rpc.o 00:04:26.220 CC lib/event/scheduler_static.o 00:04:26.220 SYMLINK libspdk_nvme.so 00:04:26.478 LIB libspdk_event.a 00:04:26.478 SO libspdk_event.so.14.0 00:04:26.478 SYMLINK libspdk_event.so 
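
The repeating CC / LIB / SO / SYMLINK lines in this stretch are SPDK's library build producing, per component, the object files, a static archive (libspdk_*.a), a versioned shared object (for example libspdk_log.so.7.0), and an unversioned libspdk_*.so symlink. As a generic sanity check (a sketch, not a command taken from this log, and the path assumes SPDK's usual build/lib output directory), the recorded sonames can be listed after the build with:

  # print each built SPDK shared library's recorded SONAME
  for so in /home/vagrant/spdk_repo/spdk/build/lib/libspdk_*.so; do
      readelf -d "$so" | grep SONAME
  done
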
00:04:27.416 LIB libspdk_blob.a 00:04:27.416 SO libspdk_blob.so.11.0 00:04:27.675 SYMLINK libspdk_blob.so 00:04:27.935 CC lib/lvol/lvol.o 00:04:27.935 CC lib/blobfs/blobfs.o 00:04:27.935 CC lib/blobfs/tree.o 00:04:28.195 LIB libspdk_bdev.a 00:04:28.501 SO libspdk_bdev.so.16.0 00:04:28.501 SYMLINK libspdk_bdev.so 00:04:28.780 CC lib/nvmf/ctrlr.o 00:04:28.780 CC lib/nvmf/ctrlr_discovery.o 00:04:28.780 CC lib/nvmf/subsystem.o 00:04:28.780 CC lib/nvmf/ctrlr_bdev.o 00:04:28.780 CC lib/nbd/nbd.o 00:04:28.780 CC lib/ftl/ftl_core.o 00:04:28.780 CC lib/ublk/ublk.o 00:04:28.780 CC lib/scsi/dev.o 00:04:28.780 LIB libspdk_blobfs.a 00:04:28.780 SO libspdk_blobfs.so.10.0 00:04:28.780 LIB libspdk_lvol.a 00:04:29.038 SO libspdk_lvol.so.10.0 00:04:29.038 SYMLINK libspdk_blobfs.so 00:04:29.038 CC lib/scsi/lun.o 00:04:29.038 SYMLINK libspdk_lvol.so 00:04:29.038 CC lib/scsi/port.o 00:04:29.038 CC lib/ublk/ublk_rpc.o 00:04:29.038 CC lib/ftl/ftl_init.o 00:04:29.038 CC lib/ftl/ftl_layout.o 00:04:29.038 CC lib/nbd/nbd_rpc.o 00:04:29.038 CC lib/ftl/ftl_debug.o 00:04:29.298 CC lib/ftl/ftl_io.o 00:04:29.298 CC lib/scsi/scsi.o 00:04:29.298 CC lib/nvmf/nvmf.o 00:04:29.298 LIB libspdk_ublk.a 00:04:29.298 LIB libspdk_nbd.a 00:04:29.298 SO libspdk_ublk.so.3.0 00:04:29.298 SO libspdk_nbd.so.7.0 00:04:29.298 CC lib/ftl/ftl_sb.o 00:04:29.298 CC lib/nvmf/nvmf_rpc.o 00:04:29.298 SYMLINK libspdk_ublk.so 00:04:29.298 CC lib/nvmf/transport.o 00:04:29.298 CC lib/scsi/scsi_bdev.o 00:04:29.558 SYMLINK libspdk_nbd.so 00:04:29.558 CC lib/nvmf/tcp.o 00:04:29.558 CC lib/ftl/ftl_l2p.o 00:04:29.558 CC lib/scsi/scsi_pr.o 00:04:29.558 CC lib/scsi/scsi_rpc.o 00:04:29.558 CC lib/ftl/ftl_l2p_flat.o 00:04:29.818 CC lib/scsi/task.o 00:04:29.818 CC lib/nvmf/stubs.o 00:04:29.818 CC lib/ftl/ftl_nv_cache.o 00:04:29.818 CC lib/nvmf/mdns_server.o 00:04:29.818 LIB libspdk_scsi.a 00:04:29.818 CC lib/nvmf/rdma.o 00:04:30.077 SO libspdk_scsi.so.9.0 00:04:30.077 CC lib/nvmf/auth.o 00:04:30.077 SYMLINK libspdk_scsi.so 00:04:30.077 CC lib/ftl/ftl_band.o 00:04:30.336 CC lib/ftl/ftl_band_ops.o 00:04:30.336 CC lib/iscsi/conn.o 00:04:30.336 CC lib/vhost/vhost.o 00:04:30.336 CC lib/vhost/vhost_rpc.o 00:04:30.336 CC lib/iscsi/init_grp.o 00:04:30.594 CC lib/iscsi/iscsi.o 00:04:30.594 CC lib/vhost/vhost_scsi.o 00:04:30.594 CC lib/iscsi/md5.o 00:04:30.850 CC lib/ftl/ftl_writer.o 00:04:30.850 CC lib/iscsi/param.o 00:04:30.850 CC lib/vhost/vhost_blk.o 00:04:30.850 CC lib/ftl/ftl_rq.o 00:04:30.850 CC lib/ftl/ftl_reloc.o 00:04:30.850 CC lib/iscsi/portal_grp.o 00:04:31.107 CC lib/iscsi/tgt_node.o 00:04:31.107 CC lib/ftl/ftl_l2p_cache.o 00:04:31.107 CC lib/ftl/ftl_p2l.o 00:04:31.107 CC lib/iscsi/iscsi_subsystem.o 00:04:31.107 CC lib/ftl/mngt/ftl_mngt.o 00:04:31.365 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:31.366 CC lib/iscsi/iscsi_rpc.o 00:04:31.366 CC lib/iscsi/task.o 00:04:31.366 CC lib/vhost/rte_vhost_user.o 00:04:31.366 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:31.623 LIB libspdk_iscsi.a 00:04:31.623 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:31.882 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:31.882 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:31.882 SO libspdk_iscsi.so.8.0 00:04:31.882 LIB libspdk_nvmf.a 00:04:31.882 CC lib/ftl/utils/ftl_conf.o 00:04:31.882 CC 
lib/ftl/utils/ftl_md.o 00:04:31.882 CC lib/ftl/utils/ftl_mempool.o 00:04:31.882 SO libspdk_nvmf.so.19.0 00:04:31.882 SYMLINK libspdk_iscsi.so 00:04:31.882 CC lib/ftl/utils/ftl_bitmap.o 00:04:31.882 CC lib/ftl/utils/ftl_property.o 00:04:31.882 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:32.141 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:32.141 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:32.141 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:32.141 SYMLINK libspdk_nvmf.so 00:04:32.141 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:32.141 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:32.141 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:32.141 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:32.141 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:32.141 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:32.400 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:32.400 CC lib/ftl/base/ftl_base_dev.o 00:04:32.400 CC lib/ftl/base/ftl_base_bdev.o 00:04:32.400 CC lib/ftl/ftl_trace.o 00:04:32.400 LIB libspdk_vhost.a 00:04:32.400 SO libspdk_vhost.so.8.0 00:04:32.659 SYMLINK libspdk_vhost.so 00:04:32.659 LIB libspdk_ftl.a 00:04:32.659 SO libspdk_ftl.so.9.0 00:04:33.228 SYMLINK libspdk_ftl.so 00:04:33.488 CC module/env_dpdk/env_dpdk_rpc.o 00:04:33.488 CC module/keyring/file/keyring.o 00:04:33.488 CC module/blob/bdev/blob_bdev.o 00:04:33.488 CC module/keyring/linux/keyring.o 00:04:33.488 CC module/sock/posix/posix.o 00:04:33.488 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.488 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.488 CC module/accel/ioat/accel_ioat.o 00:04:33.488 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:33.488 CC module/accel/error/accel_error.o 00:04:33.488 LIB libspdk_env_dpdk_rpc.a 00:04:33.748 SO libspdk_env_dpdk_rpc.so.6.0 00:04:33.748 CC module/keyring/linux/keyring_rpc.o 00:04:33.748 LIB libspdk_scheduler_gscheduler.a 00:04:33.748 SYMLINK libspdk_env_dpdk_rpc.so 00:04:33.748 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.748 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.748 CC module/keyring/file/keyring_rpc.o 00:04:33.748 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.748 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.748 LIB libspdk_scheduler_dynamic.a 00:04:33.748 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.748 CC module/accel/error/accel_error_rpc.o 00:04:33.748 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.748 SO libspdk_scheduler_dynamic.so.4.0 00:04:33.748 LIB libspdk_blob_bdev.a 00:04:33.748 LIB libspdk_keyring_linux.a 00:04:33.748 SO libspdk_blob_bdev.so.11.0 00:04:33.748 LIB libspdk_accel_ioat.a 00:04:33.748 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.748 LIB libspdk_keyring_file.a 00:04:33.748 SO libspdk_keyring_linux.so.1.0 00:04:33.748 SO libspdk_accel_ioat.so.6.0 00:04:33.748 SO libspdk_keyring_file.so.1.0 00:04:34.007 SYMLINK libspdk_blob_bdev.so 00:04:34.007 SYMLINK libspdk_keyring_linux.so 00:04:34.007 SYMLINK libspdk_keyring_file.so 00:04:34.007 LIB libspdk_accel_error.a 00:04:34.007 CC module/accel/dsa/accel_dsa.o 00:04:34.007 CC module/accel/dsa/accel_dsa_rpc.o 00:04:34.007 SYMLINK libspdk_accel_ioat.so 00:04:34.007 CC module/accel/iaa/accel_iaa.o 00:04:34.007 CC module/accel/iaa/accel_iaa_rpc.o 00:04:34.007 SO libspdk_accel_error.so.2.0 00:04:34.007 SYMLINK libspdk_accel_error.so 00:04:34.266 CC module/bdev/delay/vbdev_delay.o 00:04:34.266 LIB libspdk_accel_iaa.a 00:04:34.266 CC module/blobfs/bdev/blobfs_bdev.o 00:04:34.266 CC module/bdev/error/vbdev_error.o 00:04:34.266 CC module/bdev/lvol/vbdev_lvol.o 00:04:34.266 CC module/bdev/gpt/gpt.o 00:04:34.266 SO 
libspdk_accel_iaa.so.3.0 00:04:34.266 LIB libspdk_accel_dsa.a 00:04:34.266 LIB libspdk_sock_posix.a 00:04:34.266 SO libspdk_accel_dsa.so.5.0 00:04:34.266 CC module/bdev/null/bdev_null.o 00:04:34.266 SO libspdk_sock_posix.so.6.0 00:04:34.266 SYMLINK libspdk_accel_iaa.so 00:04:34.266 CC module/bdev/gpt/vbdev_gpt.o 00:04:34.266 CC module/bdev/malloc/bdev_malloc.o 00:04:34.266 SYMLINK libspdk_accel_dsa.so 00:04:34.266 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:34.266 SYMLINK libspdk_sock_posix.so 00:04:34.266 CC module/bdev/null/bdev_null_rpc.o 00:04:34.266 CC module/bdev/error/vbdev_error_rpc.o 00:04:34.266 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:34.524 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:34.524 LIB libspdk_blobfs_bdev.a 00:04:34.524 LIB libspdk_bdev_error.a 00:04:34.524 LIB libspdk_bdev_null.a 00:04:34.524 SO libspdk_blobfs_bdev.so.6.0 00:04:34.524 LIB libspdk_bdev_gpt.a 00:04:34.524 LIB libspdk_bdev_delay.a 00:04:34.524 SO libspdk_bdev_error.so.6.0 00:04:34.524 SO libspdk_bdev_null.so.6.0 00:04:34.524 SO libspdk_bdev_gpt.so.6.0 00:04:34.524 SO libspdk_bdev_delay.so.6.0 00:04:34.524 SYMLINK libspdk_blobfs_bdev.so 00:04:34.524 SYMLINK libspdk_bdev_error.so 00:04:34.524 SYMLINK libspdk_bdev_null.so 00:04:34.524 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:34.524 SYMLINK libspdk_bdev_gpt.so 00:04:34.525 LIB libspdk_bdev_malloc.a 00:04:34.525 SYMLINK libspdk_bdev_delay.so 00:04:34.784 CC module/bdev/nvme/bdev_nvme.o 00:04:34.784 SO libspdk_bdev_malloc.so.6.0 00:04:34.784 CC module/bdev/passthru/vbdev_passthru.o 00:04:34.784 SYMLINK libspdk_bdev_malloc.so 00:04:34.784 CC module/bdev/raid/bdev_raid.o 00:04:34.784 CC module/bdev/split/vbdev_split.o 00:04:34.784 CC module/bdev/aio/bdev_aio.o 00:04:34.784 CC module/bdev/ftl/bdev_ftl.o 00:04:34.784 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.784 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.784 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:35.041 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:35.041 LIB libspdk_bdev_lvol.a 00:04:35.041 SO libspdk_bdev_lvol.so.6.0 00:04:35.041 CC module/bdev/split/vbdev_split_rpc.o 00:04:35.041 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:35.041 SYMLINK libspdk_bdev_lvol.so 00:04:35.041 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:35.041 CC module/bdev/aio/bdev_aio_rpc.o 00:04:35.041 LIB libspdk_bdev_passthru.a 00:04:35.041 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:35.041 SO libspdk_bdev_passthru.so.6.0 00:04:35.041 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:35.298 LIB libspdk_bdev_split.a 00:04:35.298 SO libspdk_bdev_split.so.6.0 00:04:35.298 SYMLINK libspdk_bdev_passthru.so 00:04:35.298 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:35.298 LIB libspdk_bdev_aio.a 00:04:35.298 LIB libspdk_bdev_ftl.a 00:04:35.298 SYMLINK libspdk_bdev_split.so 00:04:35.298 CC module/bdev/nvme/nvme_rpc.o 00:04:35.298 LIB libspdk_bdev_zone_block.a 00:04:35.298 SO libspdk_bdev_aio.so.6.0 00:04:35.298 SO libspdk_bdev_ftl.so.6.0 00:04:35.298 LIB libspdk_bdev_iscsi.a 00:04:35.298 SO libspdk_bdev_zone_block.so.6.0 00:04:35.298 CC module/bdev/nvme/bdev_mdns_client.o 00:04:35.298 SO libspdk_bdev_iscsi.so.6.0 00:04:35.298 SYMLINK libspdk_bdev_aio.so 00:04:35.298 SYMLINK libspdk_bdev_ftl.so 00:04:35.298 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:35.298 SYMLINK libspdk_bdev_zone_block.so 00:04:35.298 CC module/bdev/raid/bdev_raid_rpc.o 00:04:35.298 CC module/bdev/nvme/vbdev_opal.o 00:04:35.298 CC module/bdev/raid/bdev_raid_sb.o 00:04:35.298 SYMLINK libspdk_bdev_iscsi.so 00:04:35.298 CC module/bdev/raid/raid0.o 
00:04:35.556 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:35.556 LIB libspdk_bdev_virtio.a 00:04:35.556 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:35.556 SO libspdk_bdev_virtio.so.6.0 00:04:35.556 CC module/bdev/raid/raid1.o 00:04:35.556 CC module/bdev/raid/concat.o 00:04:35.815 SYMLINK libspdk_bdev_virtio.so 00:04:35.815 LIB libspdk_bdev_raid.a 00:04:36.073 SO libspdk_bdev_raid.so.6.0 00:04:36.073 SYMLINK libspdk_bdev_raid.so 00:04:36.641 LIB libspdk_bdev_nvme.a 00:04:36.900 SO libspdk_bdev_nvme.so.7.0 00:04:36.900 SYMLINK libspdk_bdev_nvme.so 00:04:37.467 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.467 CC module/event/subsystems/vmd/vmd.o 00:04:37.467 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.467 CC module/event/subsystems/sock/sock.o 00:04:37.467 CC module/event/subsystems/scheduler/scheduler.o 00:04:37.467 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:37.468 CC module/event/subsystems/iobuf/iobuf.o 00:04:37.726 CC module/event/subsystems/keyring/keyring.o 00:04:37.726 LIB libspdk_event_vhost_blk.a 00:04:37.726 LIB libspdk_event_vmd.a 00:04:37.726 SO libspdk_event_vhost_blk.so.3.0 00:04:37.726 SO libspdk_event_vmd.so.6.0 00:04:37.726 LIB libspdk_event_iobuf.a 00:04:37.726 LIB libspdk_event_scheduler.a 00:04:37.726 LIB libspdk_event_sock.a 00:04:37.726 LIB libspdk_event_keyring.a 00:04:37.726 SO libspdk_event_iobuf.so.3.0 00:04:37.726 SO libspdk_event_scheduler.so.4.0 00:04:37.726 SYMLINK libspdk_event_vhost_blk.so 00:04:37.726 SYMLINK libspdk_event_vmd.so 00:04:37.726 SO libspdk_event_sock.so.5.0 00:04:37.726 SO libspdk_event_keyring.so.1.0 00:04:37.985 SYMLINK libspdk_event_sock.so 00:04:37.985 SYMLINK libspdk_event_scheduler.so 00:04:37.985 SYMLINK libspdk_event_iobuf.so 00:04:37.985 SYMLINK libspdk_event_keyring.so 00:04:38.243 CC module/event/subsystems/accel/accel.o 00:04:38.243 LIB libspdk_event_accel.a 00:04:38.502 SO libspdk_event_accel.so.6.0 00:04:38.502 SYMLINK libspdk_event_accel.so 00:04:38.760 CC module/event/subsystems/bdev/bdev.o 00:04:39.018 LIB libspdk_event_bdev.a 00:04:39.018 SO libspdk_event_bdev.so.6.0 00:04:39.018 SYMLINK libspdk_event_bdev.so 00:04:39.276 CC module/event/subsystems/nbd/nbd.o 00:04:39.276 CC module/event/subsystems/scsi/scsi.o 00:04:39.276 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.276 CC module/event/subsystems/ublk/ublk.o 00:04:39.276 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.535 LIB libspdk_event_ublk.a 00:04:39.535 LIB libspdk_event_scsi.a 00:04:39.535 SO libspdk_event_ublk.so.3.0 00:04:39.535 LIB libspdk_event_nbd.a 00:04:39.535 SO libspdk_event_scsi.so.6.0 00:04:39.535 SO libspdk_event_nbd.so.6.0 00:04:39.535 SYMLINK libspdk_event_ublk.so 00:04:39.535 LIB libspdk_event_nvmf.a 00:04:39.535 SYMLINK libspdk_event_scsi.so 00:04:39.535 SO libspdk_event_nvmf.so.6.0 00:04:39.535 SYMLINK libspdk_event_nbd.so 00:04:39.797 SYMLINK libspdk_event_nvmf.so 00:04:39.797 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.797 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:40.056 LIB libspdk_event_vhost_scsi.a 00:04:40.056 LIB libspdk_event_iscsi.a 00:04:40.056 SO libspdk_event_vhost_scsi.so.3.0 00:04:40.056 SO libspdk_event_iscsi.so.6.0 00:04:40.316 SYMLINK libspdk_event_vhost_scsi.so 00:04:40.316 SYMLINK libspdk_event_iscsi.so 00:04:40.316 SO libspdk.so.6.0 00:04:40.574 SYMLINK libspdk.so 00:04:40.831 CC app/trace_record/trace_record.o 00:04:40.831 CXX app/trace/trace.o 00:04:40.831 CC app/iscsi_tgt/iscsi_tgt.o 00:04:40.831 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.831 CC 
app/nvmf_tgt/nvmf_main.o 00:04:40.831 CC app/spdk_tgt/spdk_tgt.o 00:04:40.831 CC examples/util/zipf/zipf.o 00:04:40.831 CC examples/ioat/perf/perf.o 00:04:40.831 CC test/thread/poller_perf/poller_perf.o 00:04:41.109 LINK iscsi_tgt 00:04:41.109 LINK spdk_trace_record 00:04:41.109 LINK nvmf_tgt 00:04:41.109 LINK zipf 00:04:41.109 LINK spdk_tgt 00:04:41.109 LINK interrupt_tgt 00:04:41.109 LINK poller_perf 00:04:41.109 LINK ioat_perf 00:04:41.109 LINK spdk_trace 00:04:41.368 CC app/spdk_lspci/spdk_lspci.o 00:04:41.368 CC app/spdk_nvme_identify/identify.o 00:04:41.368 CC app/spdk_nvme_perf/perf.o 00:04:41.368 CC app/spdk_top/spdk_top.o 00:04:41.368 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.368 CC examples/ioat/verify/verify.o 00:04:41.368 CC test/dma/test_dma/test_dma.o 00:04:41.368 LINK spdk_lspci 00:04:41.368 CC app/spdk_dd/spdk_dd.o 00:04:41.368 CC examples/thread/thread/thread_ex.o 00:04:41.626 LINK spdk_nvme_discover 00:04:41.626 LINK verify 00:04:41.626 LINK thread 00:04:41.884 LINK test_dma 00:04:41.884 CC test/app/bdev_svc/bdev_svc.o 00:04:41.884 LINK spdk_dd 00:04:41.884 CC app/fio/nvme/fio_plugin.o 00:04:42.143 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.143 LINK bdev_svc 00:04:42.143 LINK spdk_nvme_identify 00:04:42.143 LINK spdk_nvme_perf 00:04:42.143 TEST_HEADER include/spdk/accel.h 00:04:42.143 TEST_HEADER include/spdk/accel_module.h 00:04:42.143 TEST_HEADER include/spdk/assert.h 00:04:42.143 TEST_HEADER include/spdk/barrier.h 00:04:42.143 CC test/app/histogram_perf/histogram_perf.o 00:04:42.143 TEST_HEADER include/spdk/base64.h 00:04:42.143 TEST_HEADER include/spdk/bdev.h 00:04:42.143 TEST_HEADER include/spdk/bdev_module.h 00:04:42.143 TEST_HEADER include/spdk/bdev_zone.h 00:04:42.144 TEST_HEADER include/spdk/bit_array.h 00:04:42.144 TEST_HEADER include/spdk/bit_pool.h 00:04:42.144 TEST_HEADER include/spdk/blob_bdev.h 00:04:42.144 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:42.144 TEST_HEADER include/spdk/blobfs.h 00:04:42.144 TEST_HEADER include/spdk/blob.h 00:04:42.144 TEST_HEADER include/spdk/conf.h 00:04:42.144 TEST_HEADER include/spdk/config.h 00:04:42.144 TEST_HEADER include/spdk/cpuset.h 00:04:42.144 TEST_HEADER include/spdk/crc16.h 00:04:42.144 TEST_HEADER include/spdk/crc32.h 00:04:42.144 LINK spdk_top 00:04:42.144 TEST_HEADER include/spdk/crc64.h 00:04:42.144 TEST_HEADER include/spdk/dif.h 00:04:42.144 TEST_HEADER include/spdk/dma.h 00:04:42.144 TEST_HEADER include/spdk/endian.h 00:04:42.144 TEST_HEADER include/spdk/env_dpdk.h 00:04:42.144 TEST_HEADER include/spdk/env.h 00:04:42.144 TEST_HEADER include/spdk/event.h 00:04:42.144 TEST_HEADER include/spdk/fd_group.h 00:04:42.144 TEST_HEADER include/spdk/fd.h 00:04:42.144 TEST_HEADER include/spdk/file.h 00:04:42.144 TEST_HEADER include/spdk/ftl.h 00:04:42.144 TEST_HEADER include/spdk/gpt_spec.h 00:04:42.403 TEST_HEADER include/spdk/hexlify.h 00:04:42.403 TEST_HEADER include/spdk/histogram_data.h 00:04:42.403 TEST_HEADER include/spdk/idxd.h 00:04:42.403 TEST_HEADER include/spdk/idxd_spec.h 00:04:42.403 TEST_HEADER include/spdk/init.h 00:04:42.403 TEST_HEADER include/spdk/ioat.h 00:04:42.403 TEST_HEADER include/spdk/ioat_spec.h 00:04:42.403 TEST_HEADER include/spdk/iscsi_spec.h 00:04:42.403 TEST_HEADER include/spdk/json.h 00:04:42.403 TEST_HEADER include/spdk/jsonrpc.h 00:04:42.403 CC app/fio/bdev/fio_plugin.o 00:04:42.403 TEST_HEADER include/spdk/keyring.h 00:04:42.403 TEST_HEADER include/spdk/keyring_module.h 00:04:42.403 TEST_HEADER include/spdk/likely.h 00:04:42.403 TEST_HEADER include/spdk/log.h 
00:04:42.403 TEST_HEADER include/spdk/lvol.h 00:04:42.403 TEST_HEADER include/spdk/memory.h 00:04:42.403 TEST_HEADER include/spdk/mmio.h 00:04:42.403 TEST_HEADER include/spdk/nbd.h 00:04:42.403 TEST_HEADER include/spdk/net.h 00:04:42.403 TEST_HEADER include/spdk/notify.h 00:04:42.403 TEST_HEADER include/spdk/nvme.h 00:04:42.403 TEST_HEADER include/spdk/nvme_intel.h 00:04:42.403 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:42.403 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:42.403 TEST_HEADER include/spdk/nvme_spec.h 00:04:42.403 TEST_HEADER include/spdk/nvme_zns.h 00:04:42.403 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:42.403 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:42.403 TEST_HEADER include/spdk/nvmf.h 00:04:42.403 TEST_HEADER include/spdk/nvmf_spec.h 00:04:42.403 LINK histogram_perf 00:04:42.403 TEST_HEADER include/spdk/nvmf_transport.h 00:04:42.403 TEST_HEADER include/spdk/opal.h 00:04:42.403 TEST_HEADER include/spdk/opal_spec.h 00:04:42.403 TEST_HEADER include/spdk/pci_ids.h 00:04:42.403 TEST_HEADER include/spdk/pipe.h 00:04:42.403 TEST_HEADER include/spdk/queue.h 00:04:42.403 TEST_HEADER include/spdk/reduce.h 00:04:42.403 TEST_HEADER include/spdk/rpc.h 00:04:42.403 TEST_HEADER include/spdk/scheduler.h 00:04:42.403 TEST_HEADER include/spdk/scsi.h 00:04:42.403 TEST_HEADER include/spdk/scsi_spec.h 00:04:42.403 TEST_HEADER include/spdk/sock.h 00:04:42.403 TEST_HEADER include/spdk/stdinc.h 00:04:42.403 TEST_HEADER include/spdk/string.h 00:04:42.403 TEST_HEADER include/spdk/thread.h 00:04:42.403 TEST_HEADER include/spdk/trace.h 00:04:42.403 TEST_HEADER include/spdk/trace_parser.h 00:04:42.403 TEST_HEADER include/spdk/tree.h 00:04:42.403 TEST_HEADER include/spdk/ublk.h 00:04:42.403 TEST_HEADER include/spdk/util.h 00:04:42.403 TEST_HEADER include/spdk/uuid.h 00:04:42.403 TEST_HEADER include/spdk/version.h 00:04:42.403 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:42.403 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:42.403 LINK nvme_fuzz 00:04:42.403 TEST_HEADER include/spdk/vhost.h 00:04:42.403 TEST_HEADER include/spdk/vmd.h 00:04:42.403 TEST_HEADER include/spdk/xor.h 00:04:42.403 TEST_HEADER include/spdk/zipf.h 00:04:42.403 CXX test/cpp_headers/accel.o 00:04:42.403 CC test/app/jsoncat/jsoncat.o 00:04:42.661 CXX test/cpp_headers/accel_module.o 00:04:42.661 LINK spdk_nvme 00:04:42.661 CC test/event/event_perf/event_perf.o 00:04:42.661 CC test/nvme/aer/aer.o 00:04:42.662 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.662 LINK jsoncat 00:04:42.662 CC test/rpc_client/rpc_client_test.o 00:04:42.662 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:42.662 CXX test/cpp_headers/assert.o 00:04:42.662 LINK event_perf 00:04:42.662 CC test/env/vtophys/vtophys.o 00:04:42.920 CXX test/cpp_headers/barrier.o 00:04:42.920 LINK spdk_bdev 00:04:42.920 LINK aer 00:04:42.920 LINK rpc_client_test 00:04:42.920 LINK vtophys 00:04:42.920 CXX test/cpp_headers/base64.o 00:04:42.920 CC test/event/reactor/reactor.o 00:04:43.178 CC app/vhost/vhost.o 00:04:43.178 CC test/nvme/reset/reset.o 00:04:43.178 CXX test/cpp_headers/bdev.o 00:04:43.178 CC test/event/reactor_perf/reactor_perf.o 00:04:43.178 LINK reactor 00:04:43.178 CC examples/sock/hello_world/hello_sock.o 00:04:43.178 LINK mem_callbacks 00:04:43.178 CC test/nvme/sgl/sgl.o 00:04:43.436 LINK vhost 00:04:43.436 LINK reactor_perf 00:04:43.436 CXX test/cpp_headers/bdev_module.o 00:04:43.436 CC test/nvme/e2edp/nvme_dp.o 00:04:43.436 LINK reset 00:04:43.436 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:43.436 LINK hello_sock 00:04:43.716 CXX 
test/cpp_headers/bdev_zone.o 00:04:43.716 LINK env_dpdk_post_init 00:04:43.716 CC test/event/app_repeat/app_repeat.o 00:04:43.716 LINK sgl 00:04:43.716 LINK nvme_dp 00:04:43.716 CC test/event/scheduler/scheduler.o 00:04:43.716 CXX test/cpp_headers/bit_array.o 00:04:43.974 LINK app_repeat 00:04:43.974 CC test/env/memory/memory_ut.o 00:04:43.974 CC test/accel/dif/dif.o 00:04:43.974 CC test/env/pci/pci_ut.o 00:04:43.974 CXX test/cpp_headers/bit_pool.o 00:04:43.974 CC test/blobfs/mkfs/mkfs.o 00:04:43.974 CC test/nvme/overhead/overhead.o 00:04:43.974 LINK scheduler 00:04:43.974 CXX test/cpp_headers/blob_bdev.o 00:04:44.233 LINK mkfs 00:04:44.233 CXX test/cpp_headers/blobfs_bdev.o 00:04:44.233 CC examples/vmd/lsvmd/lsvmd.o 00:04:44.233 LINK overhead 00:04:44.233 CC examples/vmd/led/led.o 00:04:44.233 LINK pci_ut 00:04:44.495 LINK dif 00:04:44.495 CXX test/cpp_headers/blobfs.o 00:04:44.495 LINK lsvmd 00:04:44.495 LINK led 00:04:44.495 LINK iscsi_fuzz 00:04:44.753 CC test/nvme/err_injection/err_injection.o 00:04:44.753 CXX test/cpp_headers/blob.o 00:04:44.753 CXX test/cpp_headers/conf.o 00:04:44.753 CC test/lvol/esnap/esnap.o 00:04:45.011 CXX test/cpp_headers/config.o 00:04:45.011 CC test/app/stub/stub.o 00:04:45.011 LINK err_injection 00:04:45.011 CXX test/cpp_headers/cpuset.o 00:04:45.011 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:45.011 CC examples/idxd/perf/perf.o 00:04:45.011 CC test/nvme/startup/startup.o 00:04:45.011 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:45.269 LINK stub 00:04:45.269 CXX test/cpp_headers/crc16.o 00:04:45.269 LINK memory_ut 00:04:45.269 CC test/bdev/bdevio/bdevio.o 00:04:45.269 LINK startup 00:04:45.526 CXX test/cpp_headers/crc32.o 00:04:45.526 CC test/nvme/reserve/reserve.o 00:04:45.526 LINK idxd_perf 00:04:45.526 LINK vhost_fuzz 00:04:45.785 CC test/nvme/simple_copy/simple_copy.o 00:04:45.785 CXX test/cpp_headers/crc64.o 00:04:45.785 CC examples/accel/perf/accel_perf.o 00:04:45.785 CC test/nvme/connect_stress/connect_stress.o 00:04:45.785 CXX test/cpp_headers/dif.o 00:04:45.785 LINK bdevio 00:04:45.785 LINK reserve 00:04:46.044 LINK simple_copy 00:04:46.044 LINK connect_stress 00:04:46.044 CC test/nvme/compliance/nvme_compliance.o 00:04:46.044 CC test/nvme/boot_partition/boot_partition.o 00:04:46.044 CXX test/cpp_headers/dma.o 00:04:46.044 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.044 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.044 CXX test/cpp_headers/endian.o 00:04:46.303 LINK accel_perf 00:04:46.303 LINK boot_partition 00:04:46.303 CC test/nvme/fdp/fdp.o 00:04:46.303 LINK nvme_compliance 00:04:46.303 CXX test/cpp_headers/env_dpdk.o 00:04:46.303 LINK fused_ordering 00:04:46.303 LINK doorbell_aers 00:04:46.303 CC examples/blob/hello_world/hello_blob.o 00:04:46.303 CXX test/cpp_headers/env.o 00:04:46.562 CXX test/cpp_headers/event.o 00:04:46.562 CC test/nvme/cuse/cuse.o 00:04:46.562 CXX test/cpp_headers/fd_group.o 00:04:46.562 LINK fdp 00:04:46.562 CC examples/blob/cli/blobcli.o 00:04:46.562 LINK hello_blob 00:04:46.562 CXX test/cpp_headers/fd.o 00:04:46.562 CC examples/nvme/hello_world/hello_world.o 00:04:46.882 CC examples/bdev/hello_world/hello_bdev.o 00:04:46.882 CXX test/cpp_headers/file.o 00:04:46.882 CC examples/bdev/bdevperf/bdevperf.o 00:04:46.882 CC examples/nvme/reconnect/reconnect.o 00:04:46.882 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:46.882 LINK hello_world 00:04:46.882 CXX test/cpp_headers/ftl.o 00:04:47.156 LINK hello_bdev 00:04:47.156 LINK blobcli 00:04:47.156 CXX test/cpp_headers/gpt_spec.o 00:04:47.156 CXX 
test/cpp_headers/hexlify.o 00:04:47.156 CC examples/nvme/arbitration/arbitration.o 00:04:47.156 LINK reconnect 00:04:47.415 CXX test/cpp_headers/histogram_data.o 00:04:47.415 CXX test/cpp_headers/idxd.o 00:04:47.415 LINK nvme_manage 00:04:47.415 CXX test/cpp_headers/idxd_spec.o 00:04:47.673 CC examples/nvme/hotplug/hotplug.o 00:04:47.674 LINK arbitration 00:04:47.674 CXX test/cpp_headers/init.o 00:04:47.674 LINK bdevperf 00:04:47.674 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:47.674 CC examples/nvme/abort/abort.o 00:04:47.674 LINK cuse 00:04:47.674 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:47.674 CXX test/cpp_headers/ioat.o 00:04:47.674 CXX test/cpp_headers/ioat_spec.o 00:04:47.674 LINK hotplug 00:04:47.932 CXX test/cpp_headers/iscsi_spec.o 00:04:47.932 CXX test/cpp_headers/json.o 00:04:47.932 LINK cmb_copy 00:04:47.932 CXX test/cpp_headers/jsonrpc.o 00:04:47.932 CXX test/cpp_headers/keyring.o 00:04:47.932 LINK pmr_persistence 00:04:47.932 CXX test/cpp_headers/keyring_module.o 00:04:47.932 CXX test/cpp_headers/likely.o 00:04:47.932 CXX test/cpp_headers/log.o 00:04:47.932 CXX test/cpp_headers/lvol.o 00:04:48.190 CXX test/cpp_headers/memory.o 00:04:48.190 CXX test/cpp_headers/mmio.o 00:04:48.190 CXX test/cpp_headers/nbd.o 00:04:48.190 LINK abort 00:04:48.190 CXX test/cpp_headers/net.o 00:04:48.190 CXX test/cpp_headers/notify.o 00:04:48.190 CXX test/cpp_headers/nvme.o 00:04:48.190 CXX test/cpp_headers/nvme_intel.o 00:04:48.190 CXX test/cpp_headers/nvme_ocssd.o 00:04:48.190 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:48.190 CXX test/cpp_headers/nvme_spec.o 00:04:48.190 CXX test/cpp_headers/nvme_zns.o 00:04:48.190 CXX test/cpp_headers/nvmf_cmd.o 00:04:48.190 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:48.190 CXX test/cpp_headers/nvmf.o 00:04:48.448 CXX test/cpp_headers/nvmf_spec.o 00:04:48.448 CXX test/cpp_headers/opal.o 00:04:48.448 CXX test/cpp_headers/nvmf_transport.o 00:04:48.448 CXX test/cpp_headers/opal_spec.o 00:04:48.448 CXX test/cpp_headers/pci_ids.o 00:04:48.448 CXX test/cpp_headers/pipe.o 00:04:48.448 CXX test/cpp_headers/queue.o 00:04:48.448 CXX test/cpp_headers/reduce.o 00:04:48.448 CXX test/cpp_headers/rpc.o 00:04:48.448 CC examples/nvmf/nvmf/nvmf.o 00:04:48.448 CXX test/cpp_headers/scheduler.o 00:04:48.448 CXX test/cpp_headers/scsi.o 00:04:48.448 CXX test/cpp_headers/scsi_spec.o 00:04:48.448 CXX test/cpp_headers/sock.o 00:04:48.705 CXX test/cpp_headers/stdinc.o 00:04:48.705 CXX test/cpp_headers/string.o 00:04:48.705 CXX test/cpp_headers/thread.o 00:04:48.705 CXX test/cpp_headers/trace.o 00:04:48.705 CXX test/cpp_headers/trace_parser.o 00:04:48.705 CXX test/cpp_headers/tree.o 00:04:48.705 CXX test/cpp_headers/ublk.o 00:04:48.705 CXX test/cpp_headers/util.o 00:04:48.705 CXX test/cpp_headers/uuid.o 00:04:48.705 CXX test/cpp_headers/version.o 00:04:48.705 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.705 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.705 CXX test/cpp_headers/vhost.o 00:04:48.705 CXX test/cpp_headers/vmd.o 00:04:48.705 LINK nvmf 00:04:48.964 CXX test/cpp_headers/xor.o 00:04:48.964 CXX test/cpp_headers/zipf.o 00:04:50.371 LINK esnap 00:04:50.371 00:04:50.371 real 1m4.992s 00:04:50.371 user 6m16.428s 00:04:50.371 sys 1m34.877s 00:04:50.371 07:19:23 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:50.371 ************************************ 00:04:50.371 END TEST make 00:04:50.371 ************************************ 00:04:50.371 07:19:23 make -- common/autotest_common.sh@10 -- $ set +x 00:04:50.629 07:19:23 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:04:50.629 07:19:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:50.629 07:19:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:50.630 07:19:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.630 07:19:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:50.630 07:19:23 -- pm/common@44 -- $ pid=5363 00:04:50.630 07:19:23 -- pm/common@50 -- $ kill -TERM 5363 00:04:50.630 07:19:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.630 07:19:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:50.630 07:19:23 -- pm/common@44 -- $ pid=5365 00:04:50.630 07:19:23 -- pm/common@50 -- $ kill -TERM 5365 00:04:50.630 07:19:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.630 07:19:23 -- nvmf/common.sh@7 -- # uname -s 00:04:50.630 07:19:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.630 07:19:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.630 07:19:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.630 07:19:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.630 07:19:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.630 07:19:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.630 07:19:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.630 07:19:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.630 07:19:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.630 07:19:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.630 07:19:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:04:50.630 07:19:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:04:50.630 07:19:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.630 07:19:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.630 07:19:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:50.630 07:19:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.630 07:19:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.630 07:19:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.630 07:19:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.630 07:19:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.630 07:19:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.630 07:19:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.630 07:19:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.630 
07:19:23 -- paths/export.sh@5 -- # export PATH 00:04:50.630 07:19:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.630 07:19:23 -- nvmf/common.sh@47 -- # : 0 00:04:50.630 07:19:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:50.630 07:19:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:50.630 07:19:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.630 07:19:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.630 07:19:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.630 07:19:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:50.630 07:19:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:50.630 07:19:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:50.630 07:19:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:50.630 07:19:23 -- spdk/autotest.sh@32 -- # uname -s 00:04:50.630 07:19:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:50.630 07:19:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:50.630 07:19:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:50.630 07:19:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:50.630 07:19:23 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:50.630 07:19:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:50.630 07:19:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:50.630 07:19:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:50.630 07:19:23 -- spdk/autotest.sh@48 -- # udevadm_pid=54719 00:04:50.630 07:19:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:50.630 07:19:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:50.630 07:19:23 -- pm/common@17 -- # local monitor 00:04:50.630 07:19:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.630 07:19:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.630 07:19:23 -- pm/common@25 -- # sleep 1 00:04:50.630 07:19:23 -- pm/common@21 -- # date +%s 00:04:50.887 07:19:23 -- pm/common@21 -- # date +%s 00:04:50.887 07:19:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721891963 00:04:50.887 07:19:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721891963 00:04:50.887 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721891963_collect-vmstat.pm.log 00:04:50.887 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721891963_collect-cpu-load.pm.log 00:04:51.823 07:19:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:51.823 07:19:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:51.823 07:19:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.823 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.823 07:19:24 -- spdk/autotest.sh@59 -- # create_test_list 00:04:51.823 07:19:24 -- common/autotest_common.sh@746 -- # xtrace_disable 
00:04:51.823 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.823 07:19:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:51.823 07:19:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:51.823 07:19:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:51.823 07:19:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:51.823 07:19:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:51.823 07:19:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:51.823 07:19:24 -- common/autotest_common.sh@1453 -- # uname 00:04:51.823 07:19:24 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:04:51.823 07:19:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:51.823 07:19:24 -- common/autotest_common.sh@1473 -- # uname 00:04:51.823 07:19:24 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:04:51.823 07:19:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:51.823 07:19:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:51.823 07:19:24 -- spdk/autotest.sh@72 -- # hash lcov 00:04:51.823 07:19:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:51.823 07:19:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:51.823 --rc lcov_branch_coverage=1 00:04:51.823 --rc lcov_function_coverage=1 00:04:51.823 --rc genhtml_branch_coverage=1 00:04:51.823 --rc genhtml_function_coverage=1 00:04:51.823 --rc genhtml_legend=1 00:04:51.823 --rc geninfo_all_blocks=1 00:04:51.823 ' 00:04:51.823 07:19:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:51.823 --rc lcov_branch_coverage=1 00:04:51.823 --rc lcov_function_coverage=1 00:04:51.823 --rc genhtml_branch_coverage=1 00:04:51.823 --rc genhtml_function_coverage=1 00:04:51.823 --rc genhtml_legend=1 00:04:51.823 --rc geninfo_all_blocks=1 00:04:51.823 ' 00:04:51.823 07:19:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:51.823 --rc lcov_branch_coverage=1 00:04:51.823 --rc lcov_function_coverage=1 00:04:51.823 --rc genhtml_branch_coverage=1 00:04:51.823 --rc genhtml_function_coverage=1 00:04:51.823 --rc genhtml_legend=1 00:04:51.823 --rc geninfo_all_blocks=1 00:04:51.823 --no-external' 00:04:51.823 07:19:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:51.823 --rc lcov_branch_coverage=1 00:04:51.823 --rc lcov_function_coverage=1 00:04:51.823 --rc genhtml_branch_coverage=1 00:04:51.823 --rc genhtml_function_coverage=1 00:04:51.823 --rc genhtml_legend=1 00:04:51.823 --rc geninfo_all_blocks=1 00:04:51.823 --no-external' 00:04:51.823 07:19:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:51.823 lcov: LCOV version 1.14 00:04:51.823 07:19:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:06.717 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:06.717 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:18.927 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:18.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:18.927 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions 
found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions 
found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:18.928 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:18.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:21.471 07:19:54 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:21.471 07:19:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.471 07:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:21.471 07:19:54 -- spdk/autotest.sh@91 -- # rm -f 00:05:21.471 07:19:54 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.433 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:22.433 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:22.433 07:19:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:22.433 07:19:54 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:22.433 07:19:54 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:22.433 07:19:54 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:22.433 07:19:54 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:22.433 07:19:54 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:22.433 07:19:54 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:22.433 07:19:54 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.433 07:19:54 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:22.433 07:19:54 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:22.433 07:19:54 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:22.433 07:19:54 -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:05:22.433 07:19:54 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:22.433 07:19:54 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:22.433 07:19:54 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:22.433 07:19:54 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:22.433 07:19:54 -- common/autotest_common.sh@1660 -- # local device=nvme1n2 00:05:22.433 07:19:54 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:22.433 07:19:54 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:22.433 07:19:54 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:22.434 
07:19:54 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:22.434 07:19:54 -- common/autotest_common.sh@1660 -- # local device=nvme1n3 00:05:22.434 07:19:54 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:22.434 07:19:54 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:22.434 07:19:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:22.434 07:19:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.434 07:19:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:22.434 07:19:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:22.434 07:19:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:22.434 07:19:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:22.434 No valid GPT data, bailing 00:05:22.434 07:19:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.434 07:19:55 -- scripts/common.sh@391 -- # pt= 00:05:22.434 07:19:55 -- scripts/common.sh@392 -- # return 1 00:05:22.434 07:19:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:22.434 1+0 records in 00:05:22.434 1+0 records out 00:05:22.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481713 s, 218 MB/s 00:05:22.434 07:19:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.434 07:19:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:22.434 07:19:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:22.434 07:19:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:22.434 07:19:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:22.434 No valid GPT data, bailing 00:05:22.434 07:19:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:22.434 07:19:55 -- scripts/common.sh@391 -- # pt= 00:05:22.434 07:19:55 -- scripts/common.sh@392 -- # return 1 00:05:22.434 07:19:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:22.434 1+0 records in 00:05:22.434 1+0 records out 00:05:22.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067076 s, 156 MB/s 00:05:22.434 07:19:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.434 07:19:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:22.434 07:19:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:22.434 07:19:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:22.434 07:19:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:22.703 No valid GPT data, bailing 00:05:22.703 07:19:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:22.703 07:19:55 -- scripts/common.sh@391 -- # pt= 00:05:22.703 07:19:55 -- scripts/common.sh@392 -- # return 1 00:05:22.703 07:19:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:22.703 1+0 records in 00:05:22.703 1+0 records out 00:05:22.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004545 s, 231 MB/s 00:05:22.703 07:19:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.703 07:19:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:22.703 07:19:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:22.703 07:19:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:22.703 07:19:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:22.703 No valid GPT data, bailing 00:05:22.703 07:19:55 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:22.703 07:19:55 -- scripts/common.sh@391 -- # pt= 00:05:22.703 07:19:55 -- scripts/common.sh@392 -- # return 1 00:05:22.703 07:19:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:22.703 1+0 records in 00:05:22.703 1+0 records out 00:05:22.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621519 s, 169 MB/s 00:05:22.703 07:19:55 -- spdk/autotest.sh@118 -- # sync 00:05:22.703 07:19:55 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:22.703 07:19:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:22.703 07:19:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:25.304 07:19:57 -- spdk/autotest.sh@124 -- # uname -s 00:05:25.304 07:19:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:25.304 07:19:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:25.304 07:19:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.304 07:19:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.304 07:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:25.304 ************************************ 00:05:25.304 START TEST setup.sh 00:05:25.304 ************************************ 00:05:25.304 07:19:57 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:25.304 * Looking for test storage... 00:05:25.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.304 07:19:57 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:25.304 07:19:57 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:25.304 07:19:57 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:25.304 07:19:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.304 07:19:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.304 07:19:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.304 ************************************ 00:05:25.304 START TEST acl 00:05:25.304 ************************************ 00:05:25.304 07:19:57 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:25.304 * Looking for test storage... 
00:05:25.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.563 07:19:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.563 07:19:58 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n2 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n3 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:25.564 07:19:58 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:25.564 07:19:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:25.564 07:19:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:25.564 07:19:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:25.564 07:19:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:25.564 07:19:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:25.564 07:19:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.564 07:19:58 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.501 07:19:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:26.501 07:19:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:26.501 07:19:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.501 07:19:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:26.501 07:19:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.501 07:19:58 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:27.069 07:19:59 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.069 Hugepages 00:05:27.069 node hugesize free / total 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.069 00:05:27.069 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:27.069 07:19:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.329 07:19:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:27.329 07:19:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:27.329 07:19:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:27.329 07:19:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:27.329 07:19:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:27.329 07:19:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:27.329 07:20:00 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:27.329 07:20:00 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.329 07:20:00 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.329 07:20:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:27.329 ************************************ 00:05:27.329 START TEST denied 00:05:27.329 ************************************ 00:05:27.329 07:20:00 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:27.329 07:20:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:27.329 07:20:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:27.329 07:20:00 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:27.329 07:20:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.329 07:20:00 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.705 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.705 07:20:01 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.272 00:05:29.272 real 0m1.734s 00:05:29.272 user 0m0.637s 00:05:29.272 sys 0m1.054s 00:05:29.272 07:20:01 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.272 07:20:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:29.272 ************************************ 00:05:29.272 END TEST denied 00:05:29.272 ************************************ 00:05:29.272 07:20:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:29.272 07:20:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.272 07:20:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.272 07:20:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:29.272 ************************************ 00:05:29.272 START TEST allowed 00:05:29.272 ************************************ 00:05:29.272 07:20:01 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:29.272 07:20:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:29.272 07:20:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:29.272 07:20:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:29.272 07:20:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.272 07:20:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.208 07:20:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.158 00:05:31.158 real 0m1.889s 00:05:31.158 user 0m0.770s 00:05:31.158 sys 0m1.140s 00:05:31.158 07:20:03 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.158 07:20:03 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:05:31.158 ************************************ 00:05:31.158 END TEST allowed 00:05:31.158 ************************************ 00:05:31.158 00:05:31.158 real 0m5.821s 00:05:31.158 user 0m2.318s 00:05:31.158 sys 0m3.505s 00:05:31.158 07:20:03 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.158 07:20:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:31.158 ************************************ 00:05:31.158 END TEST acl 00:05:31.158 ************************************ 00:05:31.158 07:20:03 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:31.158 07:20:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.158 07:20:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.158 07:20:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.158 ************************************ 00:05:31.158 START TEST hugepages 00:05:31.158 ************************************ 00:05:31.158 07:20:03 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:31.419 * Looking for test storage... 00:05:31.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5884032 kB' 'MemAvailable: 7407776 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 486052 kB' 'Inactive: 1361264 kB' 'Active(anon): 120200 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 111172 kB' 'Mapped: 51596 kB' 'Shmem: 10488 kB' 'KReclaimable: 67136 kB' 'Slab: 142984 kB' 'SReclaimable: 67136 kB' 'SUnreclaim: 75848 kB' 'KernelStack: 6440 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 341096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.419 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 
07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.420 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.421 
07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
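Note: the trace above shows setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize, echoing 2048 (kB), after which setup/hugepages.sh records that as default_hugepages, counts the NUMA nodes and zeroes any existing per-node hugepage pools (clear_hp) before the tests run. A minimal sketch of that scan, reconstructed from the trace rather than copied from the SPDK source (the function name and per-node handling here are illustrative):

    # Return the value of one field from /proc/meminfo -- mirrors the scan in
    # the trace above (per-node meminfo files are handled similarly, by reading
    # /sys/devices/system/node/nodeN/meminfo and stripping the "Node N " prefix).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until it matches
            echo "$val"                        # e.g. 2048 for Hugepagesize (value is in kB)
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo_sketch Hugepagesize)   # 2048 in this run

    # clear_hp equivalent: reset every per-node hugepage pool before the tests.
    # The redirect target is not visible in the xtrace; writing to nr_hugepages
    # is the assumed destination here, and it needs root.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done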
00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:31.421 07:20:03 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:31.421 07:20:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.421 07:20:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.421 07:20:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:31.421 ************************************ 00:05:31.421 START TEST default_setup 00:05:31.421 ************************************ 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.421 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.363 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.363 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970872 kB' 'MemAvailable: 9494456 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498444 kB' 'Inactive: 1361280 kB' 'Active(anon): 132592 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123684 kB' 'Mapped: 51464 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142644 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75864 kB' 'KernelStack: 6352 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
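Note: the default_setup test above calls get_test_nr_hugepages with 2097152 and node 0, and the trace then shows nr_hugepages=1024; that is consistent with the requested size being taken in kB (2 GiB) and divided by the 2048 kB default page size, with the whole count assigned to the single node passed in. After scripts/setup.sh rebinds the emulated NVMe controllers (1b36 0010 -> uio_pci_generic), verify_nr_hugepages starts re-reading meminfo. A sketch of that arithmetic under those assumptions (variable names are illustrative, not SPDK's own):

    size_kb=2097152                                   # requested hugepage memory, 2 GiB
    default_hugepages=2048                            # kB per page, from Hugepagesize above
    nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024 pages
    user_nodes=(0)                                    # the single node id passed to the test
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages               # node 0 -> 1024 pages
    done
    echo "node 0 gets ${nodes_test[0]} x ${default_hugepages} kB pages"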
00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.363 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 
07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.364 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970872 kB' 'MemAvailable: 9494456 kB' 'Buffers: 2436 kB' 'Cached: 
1735168 kB' 'SwapCached: 0 kB' 'Active: 498268 kB' 'Inactive: 1361280 kB' 'Active(anon): 132416 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123588 kB' 'Mapped: 51348 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142612 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75832 kB' 'KernelStack: 6384 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.365 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.366 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970872 kB' 'MemAvailable: 9494456 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498084 kB' 'Inactive: 1361280 kB' 'Active(anon): 132232 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123392 kB' 'Mapped: 51348 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142600 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75820 kB' 'KernelStack: 6368 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353812 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 
07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.367 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 
07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.368 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.369 
07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:32.369 nr_hugepages=1024 00:05:32.369 resv_hugepages=0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.369 surplus_hugepages=0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.369 anon_hugepages=0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970872 kB' 'MemAvailable: 9494464 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 497928 kB' 'Inactive: 1361288 kB' 'Active(anon): 132076 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123196 kB' 'Mapped: 51348 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142584 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75804 kB' 'KernelStack: 6352 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 
07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.369 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.370 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.631 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970872 kB' 'MemUsed: 4271108 kB' 'SwapCached: 0 kB' 'Active: 498244 kB' 'Inactive: 1361288 kB' 'Active(anon): 132392 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1737604 kB' 'Mapped: 51348 kB' 'AnonPages: 123544 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66780 kB' 'Slab: 142580 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 
07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.632 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:32.633 node0=1024 expecting 1024 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:32.633 00:05:32.633 real 0m1.120s 00:05:32.633 user 0m0.496s 00:05:32.633 sys 0m0.579s 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.633 07:20:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:32.633 ************************************ 00:05:32.633 END TEST default_setup 00:05:32.633 ************************************ 00:05:32.633 07:20:05 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:32.633 07:20:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.633 07:20:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.633 07:20:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.633 ************************************ 00:05:32.633 START TEST per_node_1G_alloc 00:05:32.633 ************************************ 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.633 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.206 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.206 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018220 kB' 'MemAvailable: 10541812 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498664 kB' 'Inactive: 1361288 kB' 'Active(anon): 132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 
'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123680 kB' 'Mapped: 51476 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142576 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6356 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
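
The xtrace above shows the per-key scan that the test's meminfo helper performs: the whole /proc/meminfo snapshot is printed, then each "Key: value" line is split with IFS=': ' and compared against the key being looked up (AnonHugePages here). The following is a hedged, standalone sketch of that lookup, not the literal test/setup/common.sh source; the function name and argument handling are illustrative.

#!/usr/bin/env bash
# Sketch of the lookup the xtrace above is performing: walk every
# "Key: value" line of /proc/meminfo (or a per-node meminfo file, when a
# node is given) and print the value of one requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _

    # Per-node lookups read that NUMA node's meminfo instead, when present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix each line with "Node <n> "; strip that so both
    # variants split the same way on ": ".
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")

    echo 0
}

get_meminfo_sketch HugePages_Total    # 512 in the snapshot above
get_meminfo_sketch HugePages_Free 0   # same key, restricted to node 0
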
00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
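
The meminfo snapshot above (HugePages_Total: 512, Hugepagesize: 2048 kB, Hugetlb: 1048576 kB) matches the sizing step traced at the start of this test, where get_test_nr_hugepages 1048576 0 ended with nodes_test[0]=512. The sketch below shows that arithmetic under the assumption of 2048 kB default hugepages; the names are illustrative rather than the literal test/setup/hugepages.sh code.

#!/usr/bin/env bash
# Sketch of the sizing step: a request of 1048576 kB (1 GiB) is converted
# into a count of default-sized hugepages and assigned to the listed node,
# i.e. 1048576 / 2048 = 512 pages on node 0 here.
default_hugepages_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)

get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift
    local node_ids=("$@")                          # e.g. (0)
    local nr_hugepages=$((size_kb / default_hugepages_kb))
    local nodes_test=()
    local node

    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages             # each listed node gets the count
    done

    declare -p nodes_test
}

get_test_nr_hugepages_sketch 1048576 0
# On a 2048 kB hugepage system this prints: declare -a nodes_test=([0]="512")
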
00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.206 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.207 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018560 kB' 'MemAvailable: 10542152 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498100 kB' 'Inactive: 1361288 kB' 'Active(anon): 132248 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 51348 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142572 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6384 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.208 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
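
This scan is collecting HugePages_Surp (and, just below, HugePages_Rsvd) so the verification step can compare the provisioned page count with what the test asked for, as in the "node0=1024 expecting 1024" check at the end of default_setup above. The sketch below shows only the shape of that comparison under the assumption that persistent pages are total minus surplus; the real verify_nr_hugepages also does per-node and anonymous-hugepage bookkeeping.

#!/usr/bin/env bash
# Sketch of the final check: read the HugePages_* counters back out of
# /proc/meminfo and compare the persistent page count with the request
# (512 pages in this per_node_1G_alloc run).
expected=${1:-512}

read_counter() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(read_counter HugePages_Total)
free=$(read_counter HugePages_Free)
rsvd=$(read_counter HugePages_Rsvd)
surp=$(read_counter HugePages_Surp)

echo "HugePages total=$total free=$free rsvd=$rsvd surp=$surp"

# Surplus pages are allocated on demand, so only total - surp counts as
# pre-provisioned by the test.
if (( total - surp == expected )); then
    echo "hugepages=$((total - surp)) expecting $expected: OK"
else
    echo "hugepages=$((total - surp)) expecting $expected: MISMATCH" >&2
    exit 1
fi
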
00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.209 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
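
For context on where these counters came from: the NRHUGE=512 HUGENODE=0 run of scripts/setup.sh at the top of this test reserves the pages on a single NUMA node. The sketch below uses the kernel's standard per-node sysfs knob to illustrate that outcome; it is an assumption-labelled outline, not a copy of setup.sh, and it must run as root.

#!/usr/bin/env bash
# Sketch: reserve 2048 kB hugepages on one NUMA node via the kernel's
# per-node sysfs interface (the effect NRHUGE/HUGENODE aim for).
NRHUGE=${NRHUGE:-512}
HUGENODE=${HUGENODE:-0}
HPSIZE_KB=2048

sysfs=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-${HPSIZE_KB}kB/nr_hugepages

echo "$NRHUGE" > "$sysfs"

# Read the value back: the kernel may grant fewer pages than requested if it
# cannot find enough contiguous memory on that node.
granted=$(cat "$sysfs")
echo "node${HUGENODE}: requested $NRHUGE, granted $granted x ${HPSIZE_KB} kB hugepages"
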
00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018820 kB' 'MemAvailable: 10542412 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498360 kB' 'Inactive: 1361288 kB' 'Active(anon): 132508 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123628 kB' 'Mapped: 51348 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142572 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6384 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.210 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:33.211 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 
07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:33.212 nr_hugepages=512 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:33.212 resv_hugepages=0 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:33.212 surplus_hugepages=0 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:33.212 anon_hugepages=0 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018832 kB' 'MemAvailable: 10542424 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498428 kB' 'Inactive: 1361288 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123692 kB' 'Mapped: 51348 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142572 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6336 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.214 07:20:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018832 kB' 'MemUsed: 3223148 kB' 'SwapCached: 0 kB' 'Active: 498368 kB' 'Inactive: 1361288 kB' 'Active(anon): 132516 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1737604 kB' 'Mapped: 51348 kB' 'AnonPages: 123632 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66780 kB' 'Slab: 142572 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:33.215 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:33.216 node0=512 expecting 512 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:33.216 00:05:33.216 real 0m0.698s 00:05:33.216 user 0m0.336s 00:05:33.216 sys 0m0.406s 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.216 07:20:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:33.216 ************************************ 00:05:33.216 END TEST per_node_1G_alloc 00:05:33.216 ************************************ 00:05:33.216 07:20:05 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:33.216 07:20:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.216 07:20:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.216 07:20:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:33.475 ************************************ 00:05:33.475 START TEST even_2G_alloc 00:05:33.475 ************************************ 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:33.475 
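The per_node_1G_alloc case above finishes by printing "node0=512 expecting 512" and passing when the observed per-node count equals the expectation. Below is a minimal sketch of that pass condition only; the array names are illustrative and this is not the sorted_t/sorted_s bookkeeping that hugepages.sh itself performs.

#!/usr/bin/env bash
# Sketch: the check behind "node0=512 expecting 512" in the trace above --
# each node's observed hugepage count must equal what the test requested.
set -euo pipefail

nodes_test=(512)   # observed per-node hugepage counts (from per-node meminfo)
expected=(512)     # per-node counts the test asked setup.sh to allocate

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${expected[node]}"
    [[ ${nodes_test[node]} -eq ${expected[node]} ]] || exit 1
done
echo "per-node hugepage counts match"
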
07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.475 07:20:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.734 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.734 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- 
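The even_2G_alloc trace above requests 2097152 kB, arrives at nr_hugepages=1024, and assigns it to the single node. A minimal sketch of that arithmetic follows; it is not the SPDK helper itself, and the 2048 kB page size is an assumption taken from the Hugepagesize value reported later in this log.

#!/usr/bin/env bash
# Sketch: derive a hugepage count from a requested size and split it
# evenly across NUMA nodes, mirroring what the trace above computes.
set -euo pipefail

size_kb=2097152              # requested size, as in the trace
default_hugepage_kb=2048     # Hugepagesize from /proc/meminfo (assumed 2 MiB)
no_nodes=1                   # NUMA nodes present on this VM

# 2097152 kB / 2048 kB per page = 1024 pages
nr_hugepages=$(( size_kb / default_hugepage_kb ))

# Even split: every node gets the same share (1024 on a single node).
declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))
done

echo "nr_hugepages=${nr_hugepages}"
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]}"
done
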
setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7967916 kB' 'MemAvailable: 9491508 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498316 kB' 'Inactive: 1361288 kB' 'Active(anon): 132464 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 123584 kB' 'Mapped: 51440 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142540 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75760 kB' 'KernelStack: 6400 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 
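What the trace walks through next is setup/common.sh's get_meminfo reading every /proc/meminfo line, skipping fields until it reaches the requested key (AnonHugePages here), and echoing its value. The following is a standalone illustration of that lookup pattern, modeled on the variable names visible in the trace; it is a simplified sketch, not the actual setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch: look up one key in /proc/meminfo (or a per-node meminfo file),
# the way the traced loop above matches field names one by one.
set -euo pipefail
shopt -s extglob   # needed for the "Node N " prefix-stripping pattern

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node queries read the NUMA-specific file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo AnonHugePages   # prints the value in kB (0 on this VM's trace)
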
07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.999 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 
07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.000 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7967664 kB' 'MemAvailable: 9491256 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498256 kB' 'Inactive: 1361288 kB' 'Active(anon): 132404 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 123528 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142536 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75756 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
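At this point anon=0 has been recorded and the same get_meminfo walk repeats for HugePages_Surp; verify_nr_hugepages then does it once more for HugePages_Rsvd before comparing counts. The snippet below only mirrors the shape of that bookkeeping; meminfo_val is a hypothetical helper, and the exact pass/fail formula lives in hugepages.sh and may differ.

#!/usr/bin/env bash
# Sketch: gather the counters the trace above is reading and apply a
# simple sanity check of the hugepage pool.
set -euo pipefail

meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

anon=$(meminfo_val AnonHugePages)    # transparent hugepages in use
surp=$(meminfo_val HugePages_Surp)   # surplus pages beyond nr_hugepages
resv=$(meminfo_val HugePages_Rsvd)   # reserved but not yet faulted in
free=$(meminfo_val HugePages_Free)
total=$(meminfo_val HugePages_Total)

echo "anon=${anon} surp=${surp} resv=${resv} free=${free}/${total}"
# A healthy pool for this test: no surplus pages were created on demand.
(( surp == 0 )) || { echo "unexpected surplus hugepages"; exit 1; }
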
00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 
07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.001 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.002 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7967664 kB' 'MemAvailable: 9491256 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498300 kB' 'Inactive: 1361288 kB' 'Active(anon): 132448 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 123568 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142532 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75752 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 
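The meminfo snapshot printed for the HugePages_Rsvd lookup reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, i.e. exactly the pool the test requested. A quick cross-check of that relationship is sketched below; it assumes only one hugepage size is in use and that the kernel exposes the Hugetlb field (4.14+).

#!/usr/bin/env bash
# Sketch: verify that Hugetlb accounts for the whole pool, as in the
# snapshot above (1024 pages x 2048 kB = 2097152 kB).
set -euo pipefail

total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
page_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
hugetlb_kb=$(awk '$1 == "Hugetlb:" {print $2}' /proc/meminfo)

echo "${total} pages x ${page_kb} kB = $(( total * page_kb )) kB (Hugetlb: ${hugetlb_kb} kB)"
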
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.003 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 
07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 
07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.004 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:34.005 nr_hugepages=1024 00:05:34.005 resv_hugepages=0 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.005 surplus_hugepages=0 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.005 anon_hugepages=0 00:05:34.005 07:20:06 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7967976 kB' 'MemAvailable: 9491568 kB' 'Buffers: 2436 kB' 'Cached: 1735168 kB' 'SwapCached: 0 kB' 'Active: 498000 kB' 'Inactive: 1361288 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 123528 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142532 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75752 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.005 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.006 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7967976 kB' 'MemUsed: 4274004 kB' 'SwapCached: 0 kB' 'Active: 498304 kB' 'Inactive: 1361288 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1737604 kB' 'Mapped: 51352 kB' 'AnonPages: 123576 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66780 kB' 'Slab: 142528 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.007 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 
07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.008 node0=1024 expecting 1024 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:34.008 00:05:34.008 real 0m0.707s 00:05:34.008 user 0m0.343s 00:05:34.008 sys 0m0.404s 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.008 07:20:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:34.008 
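The even_2G_alloc trace above is the setup/common.sh get_meminfo helper at work: it reads /proc/meminfo (or the per-node file under /sys/devices/system/node when a node is given), splits each "field: value" pair, and skips every field until it reaches the one requested (HugePages_Rsvd, then HugePages_Total, then HugePages_Surp here), which is why each meminfo field name appears once per lookup. A minimal standalone sketch of that lookup pattern follows; the helper name is hypothetical and this is a simplified approximation, not the SPDK script itself:

    #!/usr/bin/env bash
    # Sketch of the lookup pattern shown in the trace above (hypothetical helper,
    # not SPDK's setup/common.sh). Prefer the per-node meminfo file, fall back to
    # /proc/meminfo, then scan "field: value" pairs until the requested field hits.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}   # per-node files prefix every line with "Node N"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

    get_meminfo_sketch HugePages_Total 0   # prints 1024 for the node dumped above
    get_meminfo_sketch HugePages_Rsvd      # prints 0, matching resv=0 above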
************************************ 00:05:34.008 END TEST even_2G_alloc 00:05:34.008 ************************************ 00:05:34.008 07:20:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:34.008 07:20:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.008 07:20:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.008 07:20:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:34.008 ************************************ 00:05:34.008 START TEST odd_alloc 00:05:34.008 ************************************ 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:34.008 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.009 07:20:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.579 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.579 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:34.579 07:20:07 
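The odd_alloc test starting here requests HUGEMEM=2049 (MB), i.e. 2098176 kB, which deliberately is not a whole multiple of the default 2048 kB hugepage size; the trace shows get_test_nr_hugepages turning that request into nr_hugepages=1025. A small hedged sketch of that size-to-page-count arithmetic follows; the function name is hypothetical and the real hugepages.sh may round differently, the log only shows the resulting count:

    # Ceiling division from a kB request to a hugepage count, using the values
    # visible in the trace above (2048 kB pages). Hypothetical helper, not SPDK code.
    kb_to_hugepages() {
        local size_kb=$1 hugepagesize_kb=${2:-2048}
        echo $(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    }
    kb_to_hugepages 2098176   # -> 1025, the odd_alloc request (HUGEMEM=2049)
    kb_to_hugepages 2097152   # -> 1024, matching the 1024-page even_2G_alloc case above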
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971152 kB' 'MemAvailable: 9494748 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498308 kB' 'Inactive: 1361292 kB' 'Active(anon): 132456 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123608 kB' 'Mapped: 51440 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142576 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6400 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.579 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.580 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971776 kB' 'MemAvailable: 9495372 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498232 kB' 'Inactive: 1361292 kB' 'Active(anon): 132380 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123572 kB' 'Mapped: 51440 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142576 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6384 kB' 'PageTables: 4312 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.581 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:34.582 07:20:07 
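At this point the odd_alloc verification has anon=0 and surp=0. The remaining trace below is the same key-by-key scan run twice more, first for HugePages_Rsvd (which also comes back as 0) and then for HugePages_Total; with those values the check at hugepages.sh@107 reduces to (( 1025 == nr_hugepages + surp + resv )), i.e. 1025 == 1025 + 0 + 0, so the requested odd allocation is fully accounted for.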
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.582 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7972036 kB' 'MemAvailable: 9495632 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498492 kB' 'Inactive: 1361292 kB' 'Active(anon): 132640 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123572 kB' 'Mapped: 51440 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142576 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6384 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.583 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.584 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 
07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:34.845 nr_hugepages=1025 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:34.845 resv_hugepages=0 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.845 surplus_hugepages=0 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.845 anon_hugepages=0 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7972136 kB' 'MemAvailable: 9495732 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498252 kB' 'Inactive: 1361292 kB' 'Active(anon): 132400 kB' 'Inactive(anon): 0 kB' 
'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123552 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142568 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75788 kB' 'KernelStack: 6368 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.845 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 
07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.846 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.847 
07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7972136 kB' 'MemUsed: 4269844 kB' 'SwapCached: 0 kB' 'Active: 498376 kB' 'Inactive: 1361292 kB' 'Active(anon): 132524 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1737608 kB' 'Mapped: 51352 kB' 'AnonPages: 123680 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66780 kB' 'Slab: 142568 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.847 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.848 node0=1025 expecting 1025 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:34.848 
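[note] The repeated "IFS=': ' / read -r var val _ / continue" records above are setup/common.sh's get_meminfo scanning every field of /proc/meminfo (or a node's sysfs meminfo) until the requested key is found, then echoing its value — which is how odd_alloc just confirmed HugePages_Total=1025 and HugePages_Surp=0 on node0. A minimal, hedged re-statement of that lookup pattern; the helper name lookup_meminfo is illustrative, not the repository's exact code:

lookup_meminfo() {
    # usage: lookup_meminfo <field> [numa-node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    # per-node counters live under sysfs, mirroring common.sh@23-24 above
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # per-node files prefix every line with "Node <n> "; drop it, then scan
    # field by field exactly like the IFS=': ' read loop in the trace
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# e.g. "lookup_meminfo HugePages_Total" -> 1025 and
#      "lookup_meminfo HugePages_Surp 0" -> 0 for the run captured above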
00:05:34.848 real 0m0.670s 00:05:34.848 user 0m0.305s 00:05:34.848 sys 0m0.402s 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.848 07:20:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:34.848 ************************************ 00:05:34.848 END TEST odd_alloc 00:05:34.848 ************************************ 00:05:34.848 07:20:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:34.848 07:20:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.848 07:20:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.848 07:20:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:34.848 ************************************ 00:05:34.848 START TEST custom_alloc 00:05:34.848 ************************************ 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:34.848 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.849 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.424 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.424 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.424 07:20:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9019824 kB' 'MemAvailable: 10543420 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498436 kB' 'Inactive: 1361292 kB' 'Active(anon): 132584 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 123692 kB' 'Mapped: 51480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142576 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6392 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
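[note] The custom_alloc verification starting here repeats the bookkeeping that odd_alloc exercised: read the surplus, reserved and anonymous hugepage counters, then check that the pool size matches the requested count (hugepages.sh@107 above). A hedged sketch of that arithmetic for the 512-page request set up via HUGENODE='nodes_hp[0]=512', reusing the illustrative lookup_meminfo helper sketched after the odd_alloc run:

nr_hugepages=512                          # requested by custom_alloc above
surp=$(lookup_meminfo HugePages_Surp)     # 0 in the snapshot printed above
resv=$(lookup_meminfo HugePages_Rsvd)     # 0 in the snapshot printed above
anon=$(lookup_meminfo AnonHugePages)      # THP usage, reported alongside
total=$(lookup_meminfo HugePages_Total)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
# the pool must account for the request plus any surplus/reserved pages
(( total == nr_hugepages + surp + resv )) || echo "pool size $total does not match the request"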
00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9019824 kB' 'MemAvailable: 10543420 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498284 kB' 'Inactive: 1361292 kB' 'Active(anon): 132432 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 123548 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142568 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75788 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9019824 kB' 'MemAvailable: 10543420 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498268 kB' 'Inactive: 1361292 kB' 'Active(anon): 132416 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 123520 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142564 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75784 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:35.429 nr_hugepages=512 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:35.429 resv_hugepages=0 00:05:35.429 surplus_hugepages=0 00:05:35.429 anon_hugepages=0 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.429 
07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9019824 kB' 'MemAvailable: 10543420 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 498040 kB' 'Inactive: 1361292 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 123548 kB' 'Mapped: 51352 kB' 'Shmem: 10464 kB' 'KReclaimable: 66780 kB' 'Slab: 142552 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75772 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 07:20:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9019824 kB' 'MemUsed: 3222156 kB' 'SwapCached: 0 kB' 
'Active: 498288 kB' 'Inactive: 1361292 kB' 'Active(anon): 132436 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1737608 kB' 'Mapped: 51352 kB' 'AnonPages: 123548 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66780 kB' 'Slab: 142552 kB' 'SReclaimable: 66780 kB' 'SUnreclaim: 75772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 
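Every one of these key-by-key comparisons comes from the same get_meminfo helper in test/setup/common.sh: it slurps /proc/meminfo (or a node's own meminfo under /sys/devices/system/node when a node id is passed, as in the HugePages_Surp 0 lookup running here), strips the "Node N " prefix, and walks the fields with IFS=': ' until the requested key matches. A minimal sketch of that approach, reconstructed from the xtrace lines and not guaranteed to match the helper byte for byte:

    #!/usr/bin/env bash
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node lookups read that node's meminfo, as the trace shows for node 0.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; drop that so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total      # 512 while custom_alloc holds its pool
    get_meminfo HugePages_Surp 0     # surplus pages reported by node 0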
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:35.433 node0=512 expecting 512 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:35.433 00:05:35.433 real 0m0.690s 00:05:35.433 user 0m0.342s 00:05:35.433 sys 0m0.378s 00:05:35.433 ************************************ 00:05:35.433 END TEST custom_alloc 00:05:35.433 ************************************ 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.433 07:20:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:35.693 07:20:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:35.693 07:20:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.693 07:20:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.693 07:20:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:35.693 ************************************ 00:05:35.693 START TEST no_shrink_alloc 00:05:35.693 ************************************ 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:35.693 07:20:08 
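custom_alloc finishes with node0 holding the expected 512 pages, and no_shrink_alloc immediately asks for a 2097152 kB pool pinned to node 0. With the 2048 kB default hugepage size this VM reports, that request works out to 1024 pages, which is exactly the HugePages_Total value that shows up in the meminfo dumps below. A small sketch of the sizing arithmetic, reconstructed from the trace (the real get_test_nr_hugepages in setup/hugepages.sh handles more argument shapes than this):

    # "get_test_nr_hugepages 2097152 0": size in kB, then the node ids to use.
    size_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1024

    declare -A nodes_test=()
    for node_id in 0; do
        nodes_test[$node_id]=$nr_hugepages    # all pages land on node 0 here
    done
    echo "nr_hugepages=$nr_hugepages on nodes: ${!nodes_test[*]}"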
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.693 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.952 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.952 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7984280 kB' 'MemAvailable: 9507872 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 493108 kB' 'Inactive: 1361292 kB' 'Active(anon): 127256 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118380 kB' 'Mapped: 50736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142036 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75264 kB' 'KernelStack: 6240 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.217 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
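The AnonHugePages lookup running through this stretch is gated by the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test a little earlier in the trace: verify_nr_hugepages only records anonymous THP usage when transparent hugepages are not globally disabled. A sketch of that gate, assuming the standard sysfs location that the expanded xtrace pattern points at:

    # THP gate as reconstructed from the trace; "[never]" in the enabled file
    # means transparent hugepages are off and anon usage is treated as 0.
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon_hugepages=${anon:-0}"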
00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.218 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7984344 kB' 'MemAvailable: 9507936 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 492996 kB' 'Inactive: 1361292 kB' 'Active(anon): 127144 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118260 kB' 'Mapped: 50612 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142032 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75260 kB' 'KernelStack: 6256 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.219 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 
07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.220 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7984120 kB' 'MemAvailable: 9507712 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 492892 kB' 'Inactive: 1361292 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118144 kB' 'Mapped: 50612 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142032 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75260 kB' 'KernelStack: 6240 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.221 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 
07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.223 nr_hugepages=1024 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.223 resv_hugepages=0 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.223 surplus_hugepages=0 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.223 anon_hugepages=0 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.223 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7984120 kB' 'MemAvailable: 9507712 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 492900 kB' 'Inactive: 1361292 kB' 'Active(anon): 127048 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118144 kB' 'Mapped: 50612 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142032 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75260 kB' 'KernelStack: 6240 kB' 'PageTables: 3716 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 
07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 
07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
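# The xtrace lines around this point show setup/common.sh's get_meminfo helper
# walking /proc/meminfo one field at a time ("IFS=': '", "read -r var val _",
# "continue") until it reaches HugePages_Total and echoes 1024. A minimal sketch of
# that loop, reconstructed from this trace only: the name get_meminfo_sketch and the
# simplified "Node $node " prefix strip are illustrative, not the verbatim
# setup/common.sh source (which uses an extglob pattern for the prefix).
get_meminfo_sketch() {
    local get=$1 node=${2:-}              # field name, optional NUMA node number
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node $node }")         # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # e.g. 1024 for HugePages_Total on this VM
            return 0
        fi
        continue                          # mirrors the "continue" traced on every non-match
    done
    return 1
}
# Hypothetical usage: get_meminfo_sketch HugePages_Total   -> prints 1024 in this run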
00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.225 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7984120 kB' 'MemUsed: 4257860 kB' 'SwapCached: 0 kB' 'Active: 492888 kB' 'Inactive: 1361292 kB' 'Active(anon): 127036 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1737608 kB' 'Mapped: 50612 kB' 'AnonPages: 118144 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66772 kB' 'Slab: 142032 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
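# From here the same field-by-field walk is repeated against
# /sys/devices/system/node/node0/meminfo, this time looking for HugePages_Surp:
# setup/hugepages.sh's verify_nr_hugepages has already checked that HugePages_Total
# equals nr_hugepages + surplus + reserved, and now redoes the accounting per NUMA
# node before printing "node0=1024 expecting 1024". A hedged sketch of that per-node
# pass, reconstructed from the trace and reusing get_meminfo_sketch from above;
# verify_nodes_sketch and the hard-coded values for this run (one node, 1024 pages,
# 0 surplus/reserved) are illustrative only, not the setup/hugepages.sh source.
verify_nodes_sketch() {
    local node surp resv=0 expected=1024  # values observed in this run
    local -a nodes_test
    nodes_test[0]=1024                    # get_nodes found a single node, node0
    for node in "${!nodes_test[@]}"; do
        ((nodes_test[node] += resv))      # add reserved pages to the expectation
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        ((nodes_test[node] += surp))      # add per-node surplus pages (0 here)
        echo "node$node=${nodes_test[node]} expecting $expected"
        ((nodes_test[node] == expected)) || return 1
    done
}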
00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 
07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.226 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.227 node0=1024 expecting 1024 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.227 07:20:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.803 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.803 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.803 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.803 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981880 kB' 'MemAvailable: 9505472 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 493140 kB' 'Inactive: 1361292 kB' 'Active(anon): 127288 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118416 kB' 'Mapped: 50728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142020 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75248 kB' 'KernelStack: 6244 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.803 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 
07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.804 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981880 kB' 'MemAvailable: 9505472 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 492860 kB' 'Inactive: 1361292 kB' 'Active(anon): 127008 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118164 kB' 'Mapped: 50612 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142016 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75244 kB' 'KernelStack: 6240 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.805 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.806 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981880 kB' 'MemAvailable: 9505472 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 492872 kB' 'Inactive: 1361292 kB' 'Active(anon): 127020 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118164 kB' 'Mapped: 50612 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 'Slab: 142016 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75244 kB' 
'KernelStack: 6240 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.807 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.808 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
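The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries above are bash xtrace output from the get_meminfo helper in test/setup/common.sh: it snapshots /proc/meminfo (or a per-node sysfs copy) once via mapfile, then scans the cached lines key by key until the requested field is found and echoes only that field's value. The sketch below is a reconstruction inferred from this trace, not the verbatim SPDK helper; the per-node branch, the "Node <n>" prefix stripping via sed, and the return-1 miss path are assumptions.

#!/usr/bin/env bash
# Minimal sketch of the field scan that the xtrace above keeps repeating.
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem_f mem

    # Default to the system-wide snapshot; a per-node query (assumed) would
    # read the sysfs copy for that node instead.
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # The per-node file prefixes each line with "Node <n> "; strip it here
    # (with sed, for simplicity) so both files parse identically, then cache
    # the whole snapshot in an array.
    mapfile -t mem < <(sed -E 's/^Node [0-9]+ //' "$mem_f")

    # Linear scan: every key that does not match is skipped (the "continue"
    # entries in the trace) until the requested one is hit and its value is
    # echoed, e.g. "HugePages_Surp: 0" -> echo 0.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1  # assumed behavior when the field is absent
}

# Example lookups against the snapshot printed above:
#   get_meminfo HugePages_Surp   -> 0
#   get_meminfo MemTotal         -> 12241980

The no_shrink_alloc test in setup/hugepages.sh then collects anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd) (all 0 in this run), echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and verifies (( 1024 == nr_hugepages + surp + resv )) before re-reading HugePages_Total, which is the sequence the trace continues with below.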
00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:36.809 nr_hugepages=1024 00:05:36.809 resv_hugepages=0 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.809 surplus_hugepages=0 00:05:36.809 anon_hugepages=0 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981880 kB' 'MemAvailable: 9505472 kB' 'Buffers: 2436 kB' 'Cached: 1735172 kB' 'SwapCached: 0 kB' 'Active: 492768 kB' 'Inactive: 1361292 kB' 'Active(anon): 126916 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118412 kB' 'Mapped: 50612 kB' 'Shmem: 10464 kB' 'KReclaimable: 66772 kB' 
'Slab: 142016 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75244 kB' 'KernelStack: 6256 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.809 07:20:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.810 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:36.811 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981880 kB' 'MemUsed: 4260100 kB' 'SwapCached: 0 kB' 'Active: 492840 kB' 'Inactive: 1361292 kB' 'Active(anon): 126988 kB' 'Inactive(anon): 0 kB' 'Active(file): 365852 kB' 'Inactive(file): 1361292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1737608 kB' 'Mapped: 50612 kB' 'AnonPages: 118168 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66772 kB' 'Slab: 142008 kB' 'SReclaimable: 66772 kB' 'SUnreclaim: 75236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.811 
07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.811 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
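The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" records above is the test's get_meminfo helper scanning one meminfo file key by key until it hits the requested field. A minimal stand-alone sketch of that lookup, assuming the standard /proc/meminfo and /sys/devices/system/node/nodeN/meminfo layout; the function name and variables below are illustrative, not the test's own:

#!/usr/bin/env bash
# Look up one meminfo key (e.g. HugePages_Surp), system-wide or for one NUMA node.
get_meminfo_sketch() {
  local key=$1 node=$2
  local file=/proc/meminfo line var val _
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    file=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    # Per-node files prefix every record with "Node <n> "; strip it so the
    # key comparison matches the plain /proc/meminfo spelling.
    [[ -n $node ]] && line=${line#"Node $node "}
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$key" ]]; then
      echo "$val"
      return 0
    fi
  done < "$file"
  return 1
}

# e.g. get_meminfo_sketch HugePages_Total 0   # would print 1024 on the node traced above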
00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.812 node0=1024 expecting 1024 00:05:36.812 ************************************ 00:05:36.812 END TEST no_shrink_alloc 00:05:36.812 ************************************ 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.812 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.813 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.813 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.813 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.813 07:20:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.813 00:05:36.813 real 0m1.347s 00:05:36.813 user 0m0.608s 00:05:36.813 sys 0m0.777s 00:05:36.813 07:20:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.813 07:20:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:37.073 07:20:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:37.073 00:05:37.073 real 0m5.770s 00:05:37.073 user 0m2.631s 00:05:37.073 sys 0m3.303s 00:05:37.073 07:20:09 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.073 07:20:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:37.073 ************************************ 00:05:37.073 END TEST hugepages 00:05:37.073 ************************************ 00:05:37.073 07:20:09 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:37.073 07:20:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.073 07:20:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.073 07:20:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:37.073 ************************************ 00:05:37.073 START TEST driver 00:05:37.073 ************************************ 00:05:37.073 07:20:09 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:37.073 * Looking for test storage... 00:05:37.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.073 07:20:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:37.073 07:20:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.073 07:20:09 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.009 07:20:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:38.009 07:20:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.009 07:20:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.009 07:20:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:38.009 ************************************ 00:05:38.009 START TEST guess_driver 00:05:38.009 ************************************ 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:38.009 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:05:38.009 Looking for driver=uio_pci_generic 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.009 07:20:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:38.579 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.836 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:38.836 07:20:11 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:38.836 07:20:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.836 07:20:11 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.402 00:05:39.402 real 0m1.444s 00:05:39.402 user 0m0.488s 00:05:39.402 sys 0m0.974s 00:05:39.402 ************************************ 00:05:39.402 END TEST guess_driver 00:05:39.402 ************************************ 00:05:39.402 07:20:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.402 07:20:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:39.402 ************************************ 00:05:39.402 END TEST driver 00:05:39.402 ************************************ 00:05:39.402 00:05:39.402 real 0m2.286s 00:05:39.402 user 0m0.807s 00:05:39.402 sys 0m1.590s 00:05:39.402 07:20:11 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.402 07:20:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:39.402 07:20:11 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:39.402 07:20:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.402 07:20:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.402 07:20:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:39.402 ************************************ 00:05:39.402 START TEST devices 00:05:39.402 
************************************ 00:05:39.402 07:20:11 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:39.402 * Looking for test storage... 00:05:39.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.402 07:20:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:39.402 07:20:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:39.402 07:20:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.402 07:20:12 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.334 07:20:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n2 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n3 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:40.334 07:20:12 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:40.334 07:20:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:40.334 07:20:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
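The get_zoned_devs pass traced above boils down to reading one sysfs attribute per NVMe block device and excluding anything the kernel reports as zoned. A rough sketch of that filter, assuming the usual /sys/block layout; the helper and variable names are illustrative:

#!/usr/bin/env bash
shopt -s nullglob          # an empty /sys/block/nvme* glob should yield an empty list, not the pattern

# A disk counts as zoned if the kernel reports anything other than "none" here.
is_zoned() {
  local dev=$1
  [[ -e /sys/block/$dev/queue/zoned && $(<"/sys/block/$dev/queue/zoned") != none ]]
}

zoned_devs=()
for path in /sys/block/nvme*; do
  name=${path##*/}
  [[ $name == *c* ]] && continue          # skip controller-scoped names such as nvme0c0n1
  is_zoned "$name" && zoned_devs+=("$name")
done
echo "zoned devices to exclude: ${zoned_devs[*]:-<none>}"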
00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:40.335 No valid GPT data, bailing 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:40.335 No valid GPT data, bailing 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
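For each candidate namespace the same pattern repeats: spdk-gpt.py and blkid find no partition-table signature ("No valid GPT data, bailing"), so the device is treated as free, and its size is then checked against min_disk_size. A hedged sketch of that decision, not the test's own helpers:

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as declared above

# Free to use: blkid reports no partition-table type (PTTYPE) at all.
block_is_free() {
  local pt
  pt=$(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null)
  [[ -z $pt ]]
}

# /sys/block/<dev>/size is a count of 512-byte sectors.
block_size_bytes() {
  echo $(( $(<"/sys/block/$1/size") * 512 ))
}

if block_is_free nvme0n1 && (( $(block_size_bytes nvme0n1) >= min_disk_size )); then
  echo "nvme0n1 is a usable test disk"
fi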
00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:40.335 No valid GPT data, bailing 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:40.335 07:20:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:40.335 07:20:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:40.335 No valid GPT data, bailing 00:05:40.335 07:20:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:40.335 07:20:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.335 07:20:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.335 07:20:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:40.335 07:20:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:40.335 07:20:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:40.335 07:20:13 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:40.335 07:20:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:40.335 07:20:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.335 07:20:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:40.335 07:20:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:40.335 07:20:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:40.335 07:20:13 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:40.335 07:20:13 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.335 07:20:13 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.335 07:20:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:40.335 ************************************ 00:05:40.335 START TEST nvme_mount 00:05:40.335 ************************************ 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:40.335 07:20:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:41.711 Creating new GPT entries in memory. 00:05:41.711 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:41.711 other utilities. 00:05:41.711 07:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:41.711 07:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.711 07:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.711 07:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.711 07:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:42.650 Creating new GPT entries in memory. 00:05:42.650 The operation has completed successfully. 
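The sgdisk output just above ("GPT data structures destroyed!" ... "The operation has completed successfully.") comes from a wipe-then-partition sequence. A rough equivalent using the same commands and sector numbers as the trace; udevadm settle is only an assumed stand-in for the test's sync_dev_uevents.sh helper:

disk=/dev/nvme0n1

# Destroy any existing GPT and MBR structures on the disk.
sgdisk "$disk" --zap-all

# Create partition 1 spanning sectors 2048-264191, holding an exclusive
# lock on the disk node while sgdisk rewrites the partition table.
flock "$disk" sgdisk "$disk" --new=1:2048:264191

# Wait for udev to surface the new partition node before touching it.
udevadm settle
[[ -b ${disk}p1 ]] && echo "${disk}p1 is ready for mkfs"

After this point the log shows mkfs.ext4 -qF on the new partition and a mount under test/setup/nvme_mount, followed by the verify and wipefs cleanup steps.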
00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58957 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:42.650 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.651 07:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.909 07:20:15 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.909 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:43.168 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.168 07:20:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.427 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.427 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.427 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.427 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.427 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:43.427 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:43.427 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.427 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:43.427 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:43.427 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.428 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.687 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.687 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:43.687 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:43.687 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.687 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.687 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.946 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.946 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.946 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.946 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.205 07:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.464 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.464 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:44.464 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:44.464 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.464 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.464 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.724 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:44.724 00:05:44.724 real 0m4.377s 00:05:44.724 user 0m0.783s 00:05:44.724 sys 0m1.338s 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.724 07:20:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:44.724 ************************************ 00:05:44.724 END TEST nvme_mount 00:05:44.724 
************************************ 00:05:44.984 07:20:17 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:44.984 07:20:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.984 07:20:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.984 07:20:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:44.984 ************************************ 00:05:44.984 START TEST dm_mount 00:05:44.984 ************************************ 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:44.984 07:20:17 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:45.926 Creating new GPT entries in memory. 00:05:45.926 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.926 other utilities. 00:05:45.926 07:20:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.926 07:20:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.926 07:20:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.926 07:20:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.926 07:20:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:46.865 Creating new GPT entries in memory. 00:05:46.865 The operation has completed successfully. 
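The sgdisk arguments traced here follow directly from the part_start/part_end arithmetic in setup/common.sh: size=1073741824 is divided by 4096 to give 262144 sectors per partition, so the first flock+sgdisk call creates partition 1 at sectors 2048-264191 and the second call, just below, creates partition 2 at 264192-526335. A minimal sketch of that sequence (simplified from the traced helper; the sync_dev_uevents.sh wrapper that waits for the matching udev events is omitted):

    disk=nvme0n1
    size=$(( 1073741824 / 4096 ))            # 262144 sectors per partition
    sgdisk "/dev/$disk" --zap-all            # wipe existing GPT/MBR signatures
    part_start=2048
    part_end=$(( part_start + size - 1 ))    # 264191
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:"$part_start":"$part_end"
    part_start=$(( part_end + 1 ))           # 264192
    part_end=$(( part_start + size - 1 ))    # 526335
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=2:"$part_start":"$part_end"
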
00:05:46.865 07:20:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:46.865 07:20:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.865 07:20:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.865 07:20:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.865 07:20:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:48.242 The operation has completed successfully. 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59393 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.242 07:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.503 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.503 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.503 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.503 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:48.762 07:20:21 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.762 07:20:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.019 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.019 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:49.019 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:49.019 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.019 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.019 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:49.278 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:49.278 07:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:05:49.278 00:05:49.278 real 0m4.529s 00:05:49.278 user 0m0.596s 00:05:49.278 sys 0m0.912s 00:05:49.278 ************************************ 00:05:49.278 END TEST dm_mount 00:05:49.278 ************************************ 00:05:49.278 07:20:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.278 07:20:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.537 07:20:22 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.796 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.796 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.796 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:49.796 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.796 07:20:22 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:49.796 ************************************ 00:05:49.796 END TEST devices 00:05:49.796 ************************************ 00:05:49.796 00:05:49.796 real 0m10.404s 00:05:49.796 user 0m1.964s 00:05:49.796 sys 0m2.904s 00:05:49.796 07:20:22 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.796 07:20:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:49.796 00:05:49.796 real 0m24.620s 00:05:49.796 user 0m7.847s 00:05:49.796 sys 0m11.531s 00:05:49.796 07:20:22 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.796 07:20:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:49.796 ************************************ 00:05:49.796 END TEST setup.sh 00:05:49.796 ************************************ 00:05:49.796 07:20:22 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:50.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.733 Hugepages 00:05:50.733 node hugesize free / total 00:05:50.733 node0 1048576kB 0 / 0 00:05:50.733 node0 2048kB 2048 / 2048 00:05:50.733 00:05:50.733 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:50.733 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:50.733 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:50.992 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:05:50.992 07:20:23 -- spdk/autotest.sh@130 -- # uname -s 00:05:50.992 07:20:23 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:50.992 07:20:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:50.992 07:20:23 -- common/autotest_common.sh@1529 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:51.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.929 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.929 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.929 07:20:24 -- common/autotest_common.sh@1530 -- # sleep 1 00:05:52.876 07:20:25 -- common/autotest_common.sh@1531 -- # bdfs=() 00:05:52.876 07:20:25 -- common/autotest_common.sh@1531 -- # local bdfs 00:05:52.876 07:20:25 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:05:52.876 07:20:25 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:05:52.876 07:20:25 -- common/autotest_common.sh@1511 -- # bdfs=() 00:05:52.876 07:20:25 -- common/autotest_common.sh@1511 -- # local bdfs 00:05:52.876 07:20:25 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.876 07:20:25 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:52.876 07:20:25 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:05:53.135 07:20:25 -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:05:53.135 07:20:25 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:53.135 07:20:25 -- common/autotest_common.sh@1534 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.653 Waiting for block devices as requested 00:05:53.653 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:53.653 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:53.912 07:20:26 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:05:53.912 07:20:26 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1500 -- # grep 0000:00:10.0/nvme/nvme 00:05:53.912 07:20:26 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme1 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # grep oacs 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:05:53.912 07:20:26 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:05:53.912 07:20:26 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme1 
00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:05:53.912 07:20:26 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1555 -- # continue 00:05:53.912 07:20:26 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:05:53.912 07:20:26 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:53.912 07:20:26 -- common/autotest_common.sh@1500 -- # grep 0000:00:11.0/nvme/nvme 00:05:53.912 07:20:26 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # grep oacs 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:05:53.912 07:20:26 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:05:53.912 07:20:26 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:05:53.912 07:20:26 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:05:53.912 07:20:26 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:05:53.912 07:20:26 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:05:53.912 07:20:26 -- common/autotest_common.sh@1555 -- # continue 00:05:53.912 07:20:26 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:53.912 07:20:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.912 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.912 07:20:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:53.912 07:20:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.912 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.912 07:20:26 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:54.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.848 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.848 07:20:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:54.848 07:20:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.848 07:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:55.106 07:20:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:55.106 07:20:27 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:05:55.106 07:20:27 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:05:55.106 07:20:27 -- common/autotest_common.sh@1575 -- 
# bdfs=() 00:05:55.106 07:20:27 -- common/autotest_common.sh@1575 -- # local bdfs 00:05:55.106 07:20:27 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:05:55.106 07:20:27 -- common/autotest_common.sh@1511 -- # bdfs=() 00:05:55.106 07:20:27 -- common/autotest_common.sh@1511 -- # local bdfs 00:05:55.106 07:20:27 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:55.106 07:20:27 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:55.106 07:20:27 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:05:55.106 07:20:27 -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:05:55.106 07:20:27 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:55.106 07:20:27 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:05:55.106 07:20:27 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:55.106 07:20:27 -- common/autotest_common.sh@1578 -- # device=0x0010 00:05:55.106 07:20:27 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:55.106 07:20:27 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:05:55.106 07:20:27 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:55.106 07:20:27 -- common/autotest_common.sh@1578 -- # device=0x0010 00:05:55.107 07:20:27 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:55.107 07:20:27 -- common/autotest_common.sh@1584 -- # printf '%s\n' 00:05:55.107 07:20:27 -- common/autotest_common.sh@1590 -- # [[ -z '' ]] 00:05:55.107 07:20:27 -- common/autotest_common.sh@1591 -- # return 0 00:05:55.107 07:20:27 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:55.107 07:20:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:55.107 07:20:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:55.107 07:20:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:55.107 07:20:27 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:55.107 07:20:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.107 07:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:55.107 07:20:27 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:55.107 07:20:27 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:55.107 07:20:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.107 07:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.107 07:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:55.107 ************************************ 00:05:55.107 START TEST env 00:05:55.107 ************************************ 00:05:55.107 07:20:27 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:55.107 * Looking for test storage... 
00:05:55.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:55.366 07:20:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:55.366 07:20:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.366 07:20:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.366 07:20:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.366 ************************************ 00:05:55.366 START TEST env_memory 00:05:55.366 ************************************ 00:05:55.366 07:20:27 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:55.366 00:05:55.366 00:05:55.366 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.366 http://cunit.sourceforge.net/ 00:05:55.366 00:05:55.366 00:05:55.366 Suite: memory 00:05:55.366 Test: alloc and free memory map ...[2024-07-25 07:20:27.916553] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:55.366 passed 00:05:55.366 Test: mem map translation ...[2024-07-25 07:20:27.939540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:55.366 [2024-07-25 07:20:27.939583] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:55.366 [2024-07-25 07:20:27.939624] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:55.366 [2024-07-25 07:20:27.939631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:55.366 passed 00:05:55.366 Test: mem map registration ...[2024-07-25 07:20:27.983987] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:55.366 [2024-07-25 07:20:27.984036] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:55.366 passed 00:05:55.366 Test: mem map adjacent registrations ...passed 00:05:55.366 00:05:55.366 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.366 suites 1 1 n/a 0 0 00:05:55.366 tests 4 4 4 0 0 00:05:55.366 asserts 152 152 152 0 n/a 00:05:55.366 00:05:55.366 Elapsed time = 0.159 seconds 00:05:55.366 00:05:55.366 real 0m0.185s 00:05:55.366 user 0m0.164s 00:05:55.366 sys 0m0.016s 00:05:55.366 07:20:28 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.366 07:20:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:55.366 ************************************ 00:05:55.366 END TEST env_memory 00:05:55.366 ************************************ 00:05:55.366 07:20:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:55.366 07:20:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.366 07:20:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.366 07:20:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.624 ************************************ 00:05:55.624 START TEST env_vtophys 00:05:55.624 ************************************ 00:05:55.624 07:20:28 
env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:55.624 EAL: lib.eal log level changed from notice to debug 00:05:55.624 EAL: Detected lcore 0 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 1 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 2 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 3 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 4 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 5 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 6 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 7 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 8 as core 0 on socket 0 00:05:55.624 EAL: Detected lcore 9 as core 0 on socket 0 00:05:55.624 EAL: Maximum logical cores by configuration: 128 00:05:55.624 EAL: Detected CPU lcores: 10 00:05:55.624 EAL: Detected NUMA nodes: 1 00:05:55.624 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:55.624 EAL: Detected shared linkage of DPDK 00:05:55.624 EAL: No shared files mode enabled, IPC will be disabled 00:05:55.624 EAL: Selected IOVA mode 'PA' 00:05:55.624 EAL: Probing VFIO support... 00:05:55.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:55.624 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:55.624 EAL: Ask a virtual area of 0x2e000 bytes 00:05:55.624 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:55.624 EAL: Setting up physically contiguous memory... 00:05:55.624 EAL: Setting maximum number of open files to 524288 00:05:55.624 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:55.624 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:55.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.624 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:55.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.624 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:55.624 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:55.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.624 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:55.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.624 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:55.624 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:55.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.624 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:55.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.624 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:55.624 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:55.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:55.624 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:55.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:55.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:55.624 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:55.624 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:55.624 EAL: Hugepages will be freed exactly as allocated. 
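Each of the four 0x400000000-byte virtual areas reserved above is sized from the memseg-list parameters EAL just printed: 8192 segments per list at the 2 MiB (2097152-byte) hugepage size. A quick check of that arithmetic:

    # 8192 hugepage-sized segments per memseg list, 2 MiB hugepages
    printf '0x%x bytes\n' $(( 8192 * 2097152 ))                    # 0x400000000
    printf '%d GiB per memseg list\n' $(( 8192 * 2097152 >> 30 ))  # 16
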
00:05:55.624 EAL: No shared files mode enabled, IPC is disabled 00:05:55.624 EAL: No shared files mode enabled, IPC is disabled 00:05:55.624 EAL: TSC frequency is ~2290000 KHz 00:05:55.624 EAL: Main lcore 0 is ready (tid=7efc7b24ca00;cpuset=[0]) 00:05:55.624 EAL: Trying to obtain current memory policy. 00:05:55.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.624 EAL: Restoring previous memory policy: 0 00:05:55.624 EAL: request: mp_malloc_sync 00:05:55.624 EAL: No shared files mode enabled, IPC is disabled 00:05:55.624 EAL: Heap on socket 0 was expanded by 2MB 00:05:55.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:55.624 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:55.624 EAL: Mem event callback 'spdk:(nil)' registered 00:05:55.624 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:55.624 00:05:55.624 00:05:55.624 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.624 http://cunit.sourceforge.net/ 00:05:55.624 00:05:55.624 00:05:55.624 Suite: components_suite 00:05:55.624 Test: vtophys_malloc_test ...passed 00:05:55.624 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:55.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.624 EAL: Restoring previous memory policy: 4 00:05:55.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.624 EAL: request: mp_malloc_sync 00:05:55.624 EAL: No shared files mode enabled, IPC is disabled 00:05:55.624 EAL: Heap on socket 0 was expanded by 4MB 00:05:55.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.624 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was shrunk by 4MB 00:05:55.625 EAL: Trying to obtain current memory policy. 00:05:55.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.625 EAL: Restoring previous memory policy: 4 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was expanded by 6MB 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was shrunk by 6MB 00:05:55.625 EAL: Trying to obtain current memory policy. 00:05:55.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.625 EAL: Restoring previous memory policy: 4 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was expanded by 10MB 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was shrunk by 10MB 00:05:55.625 EAL: Trying to obtain current memory policy. 
00:05:55.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.625 EAL: Restoring previous memory policy: 4 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was expanded by 18MB 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was shrunk by 18MB 00:05:55.625 EAL: Trying to obtain current memory policy. 00:05:55.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.625 EAL: Restoring previous memory policy: 4 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was expanded by 34MB 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was shrunk by 34MB 00:05:55.625 EAL: Trying to obtain current memory policy. 00:05:55.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.625 EAL: Restoring previous memory policy: 4 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was expanded by 66MB 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was shrunk by 66MB 00:05:55.625 EAL: Trying to obtain current memory policy. 00:05:55.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.625 EAL: Restoring previous memory policy: 4 00:05:55.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.625 EAL: request: mp_malloc_sync 00:05:55.625 EAL: No shared files mode enabled, IPC is disabled 00:05:55.625 EAL: Heap on socket 0 was expanded by 130MB 00:05:55.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.883 EAL: request: mp_malloc_sync 00:05:55.883 EAL: No shared files mode enabled, IPC is disabled 00:05:55.883 EAL: Heap on socket 0 was shrunk by 130MB 00:05:55.883 EAL: Trying to obtain current memory policy. 00:05:55.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.883 EAL: Restoring previous memory policy: 4 00:05:55.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.883 EAL: request: mp_malloc_sync 00:05:55.883 EAL: No shared files mode enabled, IPC is disabled 00:05:55.883 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.883 EAL: request: mp_malloc_sync 00:05:55.883 EAL: No shared files mode enabled, IPC is disabled 00:05:55.883 EAL: Heap on socket 0 was shrunk by 258MB 00:05:55.883 EAL: Trying to obtain current memory policy. 
00:05:55.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.142 EAL: Restoring previous memory policy: 4 00:05:56.142 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.142 EAL: request: mp_malloc_sync 00:05:56.142 EAL: No shared files mode enabled, IPC is disabled 00:05:56.142 EAL: Heap on socket 0 was expanded by 514MB 00:05:56.142 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.142 EAL: request: mp_malloc_sync 00:05:56.142 EAL: No shared files mode enabled, IPC is disabled 00:05:56.142 EAL: Heap on socket 0 was shrunk by 514MB 00:05:56.142 EAL: Trying to obtain current memory policy. 00:05:56.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.400 EAL: Restoring previous memory policy: 4 00:05:56.400 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.400 EAL: request: mp_malloc_sync 00:05:56.400 EAL: No shared files mode enabled, IPC is disabled 00:05:56.400 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.400 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.659 passed 00:05:56.659 00:05:56.659 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.659 suites 1 1 n/a 0 0 00:05:56.659 tests 2 2 2 0 0 00:05:56.659 asserts 5246 5246 5246 0 n/a 00:05:56.659 00:05:56.659 Elapsed time = 0.988 seconds 00:05:56.659 EAL: request: mp_malloc_sync 00:05:56.659 EAL: No shared files mode enabled, IPC is disabled 00:05:56.659 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.659 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.659 EAL: request: mp_malloc_sync 00:05:56.659 EAL: No shared files mode enabled, IPC is disabled 00:05:56.659 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.659 EAL: No shared files mode enabled, IPC is disabled 00:05:56.659 EAL: No shared files mode enabled, IPC is disabled 00:05:56.659 EAL: No shared files mode enabled, IPC is disabled 00:05:56.659 00:05:56.659 real 0m1.190s 00:05:56.659 user 0m0.646s 00:05:56.659 sys 0m0.417s 00:05:56.659 07:20:29 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.659 07:20:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.659 ************************************ 00:05:56.659 END TEST env_vtophys 00:05:56.659 ************************************ 00:05:56.659 07:20:29 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.659 07:20:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.659 07:20:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.659 07:20:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.659 ************************************ 00:05:56.659 START TEST env_pci 00:05:56.659 ************************************ 00:05:56.659 07:20:29 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.659 00:05:56.659 00:05:56.659 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.659 http://cunit.sourceforge.net/ 00:05:56.659 00:05:56.659 00:05:56.659 Suite: pci 00:05:56.659 Test: pci_hook ...[2024-07-25 07:20:29.370516] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60603 has claimed it 00:05:56.659 passed 00:05:56.659 00:05:56.659 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.659 suites 1 1 n/a 0 0 00:05:56.659 tests 1 1 1 0 0 00:05:56.659 asserts 25 25 25 0 n/a 00:05:56.659 00:05:56.659 Elapsed time = 0.003 seconds 00:05:56.659 EAL: Cannot find 
device (10000:00:01.0) 00:05:56.659 EAL: Failed to attach device on primary process 00:05:56.659 00:05:56.659 real 0m0.027s 00:05:56.659 user 0m0.011s 00:05:56.659 sys 0m0.016s 00:05:56.659 07:20:29 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.659 07:20:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.659 ************************************ 00:05:56.659 END TEST env_pci 00:05:56.659 ************************************ 00:05:56.918 07:20:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.918 07:20:29 env -- env/env.sh@15 -- # uname 00:05:56.918 07:20:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.918 07:20:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:56.918 07:20:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.918 07:20:29 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:56.918 07:20:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.918 07:20:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.918 ************************************ 00:05:56.918 START TEST env_dpdk_post_init 00:05:56.918 ************************************ 00:05:56.918 07:20:29 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.918 EAL: Detected CPU lcores: 10 00:05:56.918 EAL: Detected NUMA nodes: 1 00:05:56.918 EAL: Detected shared linkage of DPDK 00:05:56.918 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.918 EAL: Selected IOVA mode 'PA' 00:05:56.918 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.918 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:56.918 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:56.918 Starting DPDK initialization... 00:05:56.918 Starting SPDK post initialization... 00:05:56.918 SPDK NVMe probe 00:05:56.918 Attaching to 0000:00:10.0 00:05:56.918 Attaching to 0000:00:11.0 00:05:56.918 Attached to 0000:00:10.0 00:05:56.918 Attached to 0000:00:11.0 00:05:56.918 Cleaning up... 
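The two controllers attached above are the QEMU NVMe devices (1b36:0010) that scripts/setup.sh rebound from the kernel nvme driver to uio_pci_generic earlier in this log; that binding is what lets the user-space spdk_nvme driver claim them here. A quick way to confirm the current binding from sysfs (standard kernel layout, not part of the test itself):

    for bdf in 0000:00:10.0 0000:00:11.0; do
        drv=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        printf '%s -> %s\n' "$bdf" "$(basename "$drv")"   # uio_pci_generic while the tests run
    done
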
00:05:56.918 00:05:56.918 real 0m0.191s 00:05:56.918 user 0m0.059s 00:05:56.918 sys 0m0.033s 00:05:56.918 07:20:29 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.918 07:20:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.918 ************************************ 00:05:56.918 END TEST env_dpdk_post_init 00:05:56.918 ************************************ 00:05:57.176 07:20:29 env -- env/env.sh@26 -- # uname 00:05:57.176 07:20:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:57.176 07:20:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:57.176 07:20:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.176 07:20:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.176 07:20:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 ************************************ 00:05:57.176 START TEST env_mem_callbacks 00:05:57.176 ************************************ 00:05:57.176 07:20:29 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:57.176 EAL: Detected CPU lcores: 10 00:05:57.176 EAL: Detected NUMA nodes: 1 00:05:57.176 EAL: Detected shared linkage of DPDK 00:05:57.176 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.176 EAL: Selected IOVA mode 'PA' 00:05:57.176 00:05:57.176 00:05:57.176 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.176 http://cunit.sourceforge.net/ 00:05:57.176 00:05:57.176 00:05:57.176 Suite: memory 00:05:57.176 Test: test ... 00:05:57.176 register 0x200000200000 2097152 00:05:57.176 malloc 3145728 00:05:57.176 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.176 register 0x200000400000 4194304 00:05:57.176 buf 0x200000500000 len 3145728 PASSED 00:05:57.176 malloc 64 00:05:57.176 buf 0x2000004fff40 len 64 PASSED 00:05:57.176 malloc 4194304 00:05:57.176 register 0x200000800000 6291456 00:05:57.176 buf 0x200000a00000 len 4194304 PASSED 00:05:57.176 free 0x200000500000 3145728 00:05:57.176 free 0x2000004fff40 64 00:05:57.176 unregister 0x200000400000 4194304 PASSED 00:05:57.176 free 0x200000a00000 4194304 00:05:57.176 unregister 0x200000800000 6291456 PASSED 00:05:57.176 malloc 8388608 00:05:57.176 register 0x200000400000 10485760 00:05:57.176 buf 0x200000600000 len 8388608 PASSED 00:05:57.176 free 0x200000600000 8388608 00:05:57.176 unregister 0x200000400000 10485760 PASSED 00:05:57.176 passed 00:05:57.176 00:05:57.176 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.176 suites 1 1 n/a 0 0 00:05:57.176 tests 1 1 1 0 0 00:05:57.176 asserts 15 15 15 0 n/a 00:05:57.176 00:05:57.176 Elapsed time = 0.010 seconds 00:05:57.176 00:05:57.176 real 0m0.149s 00:05:57.176 user 0m0.020s 00:05:57.176 sys 0m0.027s 00:05:57.176 07:20:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.176 07:20:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 ************************************ 00:05:57.176 END TEST env_mem_callbacks 00:05:57.176 ************************************ 00:05:57.176 00:05:57.176 real 0m2.182s 00:05:57.176 user 0m1.031s 00:05:57.176 sys 0m0.832s 00:05:57.176 07:20:29 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.176 07:20:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 ************************************ 00:05:57.176 END TEST env 00:05:57.176 
************************************ 00:05:57.440 07:20:29 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:57.440 07:20:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.440 07:20:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.440 07:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:57.440 ************************************ 00:05:57.440 START TEST rpc 00:05:57.440 ************************************ 00:05:57.440 07:20:29 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:57.440 * Looking for test storage... 00:05:57.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.440 07:20:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60713 00:05:57.440 07:20:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:57.440 07:20:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.440 07:20:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60713 00:05:57.440 07:20:30 rpc -- common/autotest_common.sh@829 -- # '[' -z 60713 ']' 00:05:57.440 07:20:30 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.440 07:20:30 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.440 07:20:30 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.440 07:20:30 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.440 07:20:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.440 [2024-07-25 07:20:30.140940] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:05:57.440 [2024-07-25 07:20:30.141033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:05:57.698 [2024-07-25 07:20:30.279485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.698 [2024-07-25 07:20:30.375161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:57.698 [2024-07-25 07:20:30.375210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60713' to capture a snapshot of events at runtime. 00:05:57.698 [2024-07-25 07:20:30.375218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.698 [2024-07-25 07:20:30.375224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.698 [2024-07-25 07:20:30.375228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60713 for offline analysis/debug. 
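Note: the two app_setup_trace NOTICE lines above spell out how trace data for this spdk_tgt instance (pid 60713) can be captured. A minimal sketch of both options, quoting the commands from the log itself (the build/bin location of the spdk_trace tool is an assumption about this workspace layout):

# Option 1: attach to the live target and decode the enabled bdev tracepoints.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 60713
# Option 2: preserve the shared-memory trace file for offline analysis after
# the target has exited.
cp /dev/shm/spdk_tgt_trace.pid60713 /tmp/spdk_tgt_trace.pid60713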
00:05:57.698 [2024-07-25 07:20:30.375256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.634 07:20:31 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.634 07:20:31 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.634 07:20:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.634 07:20:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.634 07:20:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:58.634 07:20:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:58.634 07:20:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.634 07:20:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.634 07:20:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.634 ************************************ 00:05:58.634 START TEST rpc_integrity 00:05:58.634 ************************************ 00:05:58.634 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.635 { 00:05:58.635 "aliases": [ 00:05:58.635 "5d904ad5-4c31-4728-8022-d4e9c26c9735" 00:05:58.635 ], 00:05:58.635 "assigned_rate_limits": { 00:05:58.635 "r_mbytes_per_sec": 0, 00:05:58.635 "rw_ios_per_sec": 0, 00:05:58.635 "rw_mbytes_per_sec": 0, 00:05:58.635 "w_mbytes_per_sec": 0 00:05:58.635 }, 00:05:58.635 "block_size": 512, 00:05:58.635 "claimed": false, 00:05:58.635 "driver_specific": {}, 00:05:58.635 "memory_domains": [ 00:05:58.635 { 00:05:58.635 "dma_device_id": "system", 00:05:58.635 "dma_device_type": 1 00:05:58.635 }, 00:05:58.635 { 00:05:58.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.635 "dma_device_type": 2 00:05:58.635 } 00:05:58.635 ], 00:05:58.635 "name": "Malloc0", 
00:05:58.635 "num_blocks": 16384, 00:05:58.635 "product_name": "Malloc disk", 00:05:58.635 "supported_io_types": { 00:05:58.635 "abort": true, 00:05:58.635 "compare": false, 00:05:58.635 "compare_and_write": false, 00:05:58.635 "copy": true, 00:05:58.635 "flush": true, 00:05:58.635 "get_zone_info": false, 00:05:58.635 "nvme_admin": false, 00:05:58.635 "nvme_io": false, 00:05:58.635 "nvme_io_md": false, 00:05:58.635 "nvme_iov_md": false, 00:05:58.635 "read": true, 00:05:58.635 "reset": true, 00:05:58.635 "seek_data": false, 00:05:58.635 "seek_hole": false, 00:05:58.635 "unmap": true, 00:05:58.635 "write": true, 00:05:58.635 "write_zeroes": true, 00:05:58.635 "zcopy": true, 00:05:58.635 "zone_append": false, 00:05:58.635 "zone_management": false 00:05:58.635 }, 00:05:58.635 "uuid": "5d904ad5-4c31-4728-8022-d4e9c26c9735", 00:05:58.635 "zoned": false 00:05:58.635 } 00:05:58.635 ]' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 [2024-07-25 07:20:31.185513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:58.635 [2024-07-25 07:20:31.185555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.635 [2024-07-25 07:20:31.185573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61cad0 00:05:58.635 [2024-07-25 07:20:31.185579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.635 [2024-07-25 07:20:31.186974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.635 [2024-07-25 07:20:31.187008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.635 Passthru0 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.635 { 00:05:58.635 "aliases": [ 00:05:58.635 "5d904ad5-4c31-4728-8022-d4e9c26c9735" 00:05:58.635 ], 00:05:58.635 "assigned_rate_limits": { 00:05:58.635 "r_mbytes_per_sec": 0, 00:05:58.635 "rw_ios_per_sec": 0, 00:05:58.635 "rw_mbytes_per_sec": 0, 00:05:58.635 "w_mbytes_per_sec": 0 00:05:58.635 }, 00:05:58.635 "block_size": 512, 00:05:58.635 "claim_type": "exclusive_write", 00:05:58.635 "claimed": true, 00:05:58.635 "driver_specific": {}, 00:05:58.635 "memory_domains": [ 00:05:58.635 { 00:05:58.635 "dma_device_id": "system", 00:05:58.635 "dma_device_type": 1 00:05:58.635 }, 00:05:58.635 { 00:05:58.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.635 "dma_device_type": 2 00:05:58.635 } 00:05:58.635 ], 00:05:58.635 "name": "Malloc0", 00:05:58.635 "num_blocks": 16384, 00:05:58.635 "product_name": "Malloc disk", 00:05:58.635 "supported_io_types": { 00:05:58.635 "abort": true, 00:05:58.635 "compare": false, 00:05:58.635 
"compare_and_write": false, 00:05:58.635 "copy": true, 00:05:58.635 "flush": true, 00:05:58.635 "get_zone_info": false, 00:05:58.635 "nvme_admin": false, 00:05:58.635 "nvme_io": false, 00:05:58.635 "nvme_io_md": false, 00:05:58.635 "nvme_iov_md": false, 00:05:58.635 "read": true, 00:05:58.635 "reset": true, 00:05:58.635 "seek_data": false, 00:05:58.635 "seek_hole": false, 00:05:58.635 "unmap": true, 00:05:58.635 "write": true, 00:05:58.635 "write_zeroes": true, 00:05:58.635 "zcopy": true, 00:05:58.635 "zone_append": false, 00:05:58.635 "zone_management": false 00:05:58.635 }, 00:05:58.635 "uuid": "5d904ad5-4c31-4728-8022-d4e9c26c9735", 00:05:58.635 "zoned": false 00:05:58.635 }, 00:05:58.635 { 00:05:58.635 "aliases": [ 00:05:58.635 "52a26a9c-05c8-5fd9-9996-1b6d6ff19b7f" 00:05:58.635 ], 00:05:58.635 "assigned_rate_limits": { 00:05:58.635 "r_mbytes_per_sec": 0, 00:05:58.635 "rw_ios_per_sec": 0, 00:05:58.635 "rw_mbytes_per_sec": 0, 00:05:58.635 "w_mbytes_per_sec": 0 00:05:58.635 }, 00:05:58.635 "block_size": 512, 00:05:58.635 "claimed": false, 00:05:58.635 "driver_specific": { 00:05:58.635 "passthru": { 00:05:58.635 "base_bdev_name": "Malloc0", 00:05:58.635 "name": "Passthru0" 00:05:58.635 } 00:05:58.635 }, 00:05:58.635 "memory_domains": [ 00:05:58.635 { 00:05:58.635 "dma_device_id": "system", 00:05:58.635 "dma_device_type": 1 00:05:58.635 }, 00:05:58.635 { 00:05:58.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.635 "dma_device_type": 2 00:05:58.635 } 00:05:58.635 ], 00:05:58.635 "name": "Passthru0", 00:05:58.635 "num_blocks": 16384, 00:05:58.635 "product_name": "passthru", 00:05:58.635 "supported_io_types": { 00:05:58.635 "abort": true, 00:05:58.635 "compare": false, 00:05:58.635 "compare_and_write": false, 00:05:58.635 "copy": true, 00:05:58.635 "flush": true, 00:05:58.635 "get_zone_info": false, 00:05:58.635 "nvme_admin": false, 00:05:58.635 "nvme_io": false, 00:05:58.635 "nvme_io_md": false, 00:05:58.635 "nvme_iov_md": false, 00:05:58.635 "read": true, 00:05:58.635 "reset": true, 00:05:58.635 "seek_data": false, 00:05:58.635 "seek_hole": false, 00:05:58.635 "unmap": true, 00:05:58.635 "write": true, 00:05:58.635 "write_zeroes": true, 00:05:58.635 "zcopy": true, 00:05:58.635 "zone_append": false, 00:05:58.635 "zone_management": false 00:05:58.635 }, 00:05:58.635 "uuid": "52a26a9c-05c8-5fd9-9996-1b6d6ff19b7f", 00:05:58.635 "zoned": false 00:05:58.635 } 00:05:58.635 ]' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:05:58.635 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:58.635 07:20:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.635 00:05:58.635 real 0m0.323s 00:05:58.635 user 0m0.196s 00:05:58.635 sys 0m0.045s 00:05:58.636 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.636 07:20:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.636 ************************************ 00:05:58.636 END TEST rpc_integrity 00:05:58.636 ************************************ 00:05:58.895 07:20:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:58.895 07:20:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.895 07:20:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.895 07:20:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 ************************************ 00:05:58.895 START TEST rpc_plugins 00:05:58.895 ************************************ 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:58.895 { 00:05:58.895 "aliases": [ 00:05:58.895 "57886999-155c-42b3-8ff9-57ad36ba09fe" 00:05:58.895 ], 00:05:58.895 "assigned_rate_limits": { 00:05:58.895 "r_mbytes_per_sec": 0, 00:05:58.895 "rw_ios_per_sec": 0, 00:05:58.895 "rw_mbytes_per_sec": 0, 00:05:58.895 "w_mbytes_per_sec": 0 00:05:58.895 }, 00:05:58.895 "block_size": 4096, 00:05:58.895 "claimed": false, 00:05:58.895 "driver_specific": {}, 00:05:58.895 "memory_domains": [ 00:05:58.895 { 00:05:58.895 "dma_device_id": "system", 00:05:58.895 "dma_device_type": 1 00:05:58.895 }, 00:05:58.895 { 00:05:58.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.895 "dma_device_type": 2 00:05:58.895 } 00:05:58.895 ], 00:05:58.895 "name": "Malloc1", 00:05:58.895 "num_blocks": 256, 00:05:58.895 "product_name": "Malloc disk", 00:05:58.895 "supported_io_types": { 00:05:58.895 "abort": true, 00:05:58.895 "compare": false, 00:05:58.895 "compare_and_write": false, 00:05:58.895 "copy": true, 00:05:58.895 "flush": true, 00:05:58.895 "get_zone_info": false, 00:05:58.895 "nvme_admin": false, 00:05:58.895 "nvme_io": false, 00:05:58.895 "nvme_io_md": false, 00:05:58.895 "nvme_iov_md": false, 00:05:58.895 "read": true, 00:05:58.895 "reset": true, 00:05:58.895 "seek_data": false, 00:05:58.895 "seek_hole": false, 00:05:58.895 "unmap": true, 00:05:58.895 "write": true, 00:05:58.895 "write_zeroes": true, 00:05:58.895 "zcopy": true, 00:05:58.895 "zone_append": false, 
00:05:58.895 "zone_management": false 00:05:58.895 }, 00:05:58.895 "uuid": "57886999-155c-42b3-8ff9-57ad36ba09fe", 00:05:58.895 "zoned": false 00:05:58.895 } 00:05:58.895 ]' 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:58.895 07:20:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:58.895 00:05:58.895 real 0m0.155s 00:05:58.895 user 0m0.089s 00:05:58.895 sys 0m0.029s 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.895 07:20:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 ************************************ 00:05:58.895 END TEST rpc_plugins 00:05:58.895 ************************************ 00:05:58.895 07:20:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:58.895 07:20:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.895 07:20:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.895 07:20:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 ************************************ 00:05:58.895 START TEST rpc_trace_cmd_test 00:05:58.895 ************************************ 00:05:58.895 07:20:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:58.895 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:58.895 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:58.895 07:20:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.895 07:20:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.155 07:20:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.155 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:59.155 "bdev": { 00:05:59.155 "mask": "0x8", 00:05:59.155 "tpoint_mask": "0xffffffffffffffff" 00:05:59.155 }, 00:05:59.155 "bdev_nvme": { 00:05:59.155 "mask": "0x4000", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "blobfs": { 00:05:59.155 "mask": "0x80", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "dsa": { 00:05:59.155 "mask": "0x200", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "ftl": { 00:05:59.155 "mask": "0x40", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "iaa": { 00:05:59.155 "mask": "0x1000", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "iscsi_conn": { 00:05:59.155 "mask": "0x2", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "nvme_pcie": { 00:05:59.155 "mask": "0x800", 
00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "nvme_tcp": { 00:05:59.155 "mask": "0x2000", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "nvmf_rdma": { 00:05:59.155 "mask": "0x10", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "nvmf_tcp": { 00:05:59.155 "mask": "0x20", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "scsi": { 00:05:59.155 "mask": "0x4", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "sock": { 00:05:59.155 "mask": "0x8000", 00:05:59.155 "tpoint_mask": "0x0" 00:05:59.155 }, 00:05:59.155 "thread": { 00:05:59.155 "mask": "0x400", 00:05:59.156 "tpoint_mask": "0x0" 00:05:59.156 }, 00:05:59.156 "tpoint_group_mask": "0x8", 00:05:59.156 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60713" 00:05:59.156 }' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:59.156 00:05:59.156 real 0m0.255s 00:05:59.156 user 0m0.219s 00:05:59.156 sys 0m0.024s 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.156 07:20:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.156 ************************************ 00:05:59.156 END TEST rpc_trace_cmd_test 00:05:59.156 ************************************ 00:05:59.415 07:20:31 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:59.415 07:20:31 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:59.415 07:20:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.415 07:20:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.415 07:20:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.415 ************************************ 00:05:59.415 START TEST go_rpc 00:05:59.415 ************************************ 00:05:59.415 07:20:31 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:59.415 07:20:31 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:59.415 07:20:31 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:59.415 07:20:31 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:59.415 07:20:31 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:59.415 07:20:31 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:59.415 07:20:31 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.415 07:20:31 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.415 07:20:32 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:59.415 07:20:32 rpc.go_rpc 
-- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["bbff70ee-d427-4872-9981-21b8f9db5138"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"bbff70ee-d427-4872-9981-21b8f9db5138","zoned":false}]' 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:59.415 07:20:32 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.415 07:20:32 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.415 07:20:32 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:59.415 07:20:32 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:59.415 00:05:59.415 real 0m0.221s 00:05:59.415 user 0m0.133s 00:05:59.415 sys 0m0.056s 00:05:59.415 07:20:32 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.415 07:20:32 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.415 ************************************ 00:05:59.415 END TEST go_rpc 00:05:59.415 ************************************ 00:05:59.675 07:20:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:59.675 07:20:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:59.675 07:20:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.675 07:20:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.675 07:20:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.675 ************************************ 00:05:59.675 START TEST rpc_daemon_integrity 00:05:59.675 ************************************ 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.675 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:59.675 { 00:05:59.675 "aliases": [ 00:05:59.675 "08960d56-f377-4268-b6d3-c0fe99ed33e9" 00:05:59.675 ], 00:05:59.675 "assigned_rate_limits": { 00:05:59.675 "r_mbytes_per_sec": 0, 00:05:59.675 "rw_ios_per_sec": 0, 00:05:59.675 "rw_mbytes_per_sec": 0, 00:05:59.675 "w_mbytes_per_sec": 0 00:05:59.675 }, 00:05:59.675 "block_size": 512, 00:05:59.675 "claimed": false, 00:05:59.675 "driver_specific": {}, 00:05:59.675 "memory_domains": [ 00:05:59.675 { 00:05:59.675 "dma_device_id": "system", 00:05:59.675 "dma_device_type": 1 00:05:59.675 }, 00:05:59.675 { 00:05:59.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.675 "dma_device_type": 2 00:05:59.675 } 00:05:59.675 ], 00:05:59.675 "name": "Malloc3", 00:05:59.675 "num_blocks": 16384, 00:05:59.675 "product_name": "Malloc disk", 00:05:59.675 "supported_io_types": { 00:05:59.675 "abort": true, 00:05:59.675 "compare": false, 00:05:59.675 "compare_and_write": false, 00:05:59.675 "copy": true, 00:05:59.675 "flush": true, 00:05:59.675 "get_zone_info": false, 00:05:59.675 "nvme_admin": false, 00:05:59.675 "nvme_io": false, 00:05:59.675 "nvme_io_md": false, 00:05:59.675 "nvme_iov_md": false, 00:05:59.675 "read": true, 00:05:59.675 "reset": true, 00:05:59.675 "seek_data": false, 00:05:59.675 "seek_hole": false, 00:05:59.675 "unmap": true, 00:05:59.675 "write": true, 00:05:59.676 "write_zeroes": true, 00:05:59.676 "zcopy": true, 00:05:59.676 "zone_append": false, 00:05:59.676 "zone_management": false 00:05:59.676 }, 00:05:59.676 "uuid": "08960d56-f377-4268-b6d3-c0fe99ed33e9", 00:05:59.676 "zoned": false 00:05:59.676 } 00:05:59.676 ]' 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.676 [2024-07-25 07:20:32.367685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:59.676 [2024-07-25 07:20:32.367739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:59.676 [2024-07-25 07:20:32.367753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x813d70 00:05:59.676 [2024-07-25 07:20:32.367760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:59.676 [2024-07-25 07:20:32.369183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:59.676 [2024-07-25 07:20:32.369214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:59.676 Passthru0 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:59.676 { 00:05:59.676 "aliases": [ 00:05:59.676 "08960d56-f377-4268-b6d3-c0fe99ed33e9" 00:05:59.676 ], 00:05:59.676 "assigned_rate_limits": { 00:05:59.676 "r_mbytes_per_sec": 0, 00:05:59.676 "rw_ios_per_sec": 0, 00:05:59.676 "rw_mbytes_per_sec": 0, 00:05:59.676 "w_mbytes_per_sec": 0 00:05:59.676 }, 00:05:59.676 "block_size": 512, 00:05:59.676 "claim_type": "exclusive_write", 00:05:59.676 "claimed": true, 00:05:59.676 "driver_specific": {}, 00:05:59.676 "memory_domains": [ 00:05:59.676 { 00:05:59.676 "dma_device_id": "system", 00:05:59.676 "dma_device_type": 1 00:05:59.676 }, 00:05:59.676 { 00:05:59.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.676 "dma_device_type": 2 00:05:59.676 } 00:05:59.676 ], 00:05:59.676 "name": "Malloc3", 00:05:59.676 "num_blocks": 16384, 00:05:59.676 "product_name": "Malloc disk", 00:05:59.676 "supported_io_types": { 00:05:59.676 "abort": true, 00:05:59.676 "compare": false, 00:05:59.676 "compare_and_write": false, 00:05:59.676 "copy": true, 00:05:59.676 "flush": true, 00:05:59.676 "get_zone_info": false, 00:05:59.676 "nvme_admin": false, 00:05:59.676 "nvme_io": false, 00:05:59.676 "nvme_io_md": false, 00:05:59.676 "nvme_iov_md": false, 00:05:59.676 "read": true, 00:05:59.676 "reset": true, 00:05:59.676 "seek_data": false, 00:05:59.676 "seek_hole": false, 00:05:59.676 "unmap": true, 00:05:59.676 "write": true, 00:05:59.676 "write_zeroes": true, 00:05:59.676 "zcopy": true, 00:05:59.676 "zone_append": false, 00:05:59.676 "zone_management": false 00:05:59.676 }, 00:05:59.676 "uuid": "08960d56-f377-4268-b6d3-c0fe99ed33e9", 00:05:59.676 "zoned": false 00:05:59.676 }, 00:05:59.676 { 00:05:59.676 "aliases": [ 00:05:59.676 "9f3019a9-8d05-55c7-ab7c-767d198fcf61" 00:05:59.676 ], 00:05:59.676 "assigned_rate_limits": { 00:05:59.676 "r_mbytes_per_sec": 0, 00:05:59.676 "rw_ios_per_sec": 0, 00:05:59.676 "rw_mbytes_per_sec": 0, 00:05:59.676 "w_mbytes_per_sec": 0 00:05:59.676 }, 00:05:59.676 "block_size": 512, 00:05:59.676 "claimed": false, 00:05:59.676 "driver_specific": { 00:05:59.676 "passthru": { 00:05:59.676 "base_bdev_name": "Malloc3", 00:05:59.676 "name": "Passthru0" 00:05:59.676 } 00:05:59.676 }, 00:05:59.676 "memory_domains": [ 00:05:59.676 { 00:05:59.676 "dma_device_id": "system", 00:05:59.676 "dma_device_type": 1 00:05:59.676 }, 00:05:59.676 { 00:05:59.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.676 "dma_device_type": 2 00:05:59.676 } 00:05:59.676 ], 00:05:59.676 "name": "Passthru0", 00:05:59.676 "num_blocks": 16384, 00:05:59.676 "product_name": "passthru", 00:05:59.676 "supported_io_types": { 00:05:59.676 "abort": true, 00:05:59.676 "compare": false, 00:05:59.676 "compare_and_write": false, 00:05:59.676 "copy": true, 00:05:59.676 "flush": true, 00:05:59.676 "get_zone_info": false, 00:05:59.676 "nvme_admin": false, 00:05:59.676 "nvme_io": false, 00:05:59.676 "nvme_io_md": false, 00:05:59.676 "nvme_iov_md": false, 00:05:59.676 "read": true, 00:05:59.676 "reset": true, 00:05:59.676 "seek_data": false, 00:05:59.676 "seek_hole": false, 00:05:59.676 
"unmap": true, 00:05:59.676 "write": true, 00:05:59.676 "write_zeroes": true, 00:05:59.676 "zcopy": true, 00:05:59.676 "zone_append": false, 00:05:59.676 "zone_management": false 00:05:59.676 }, 00:05:59.676 "uuid": "9f3019a9-8d05-55c7-ab7c-767d198fcf61", 00:05:59.676 "zoned": false 00:05:59.676 } 00:05:59.676 ]' 00:05:59.676 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:59.936 ************************************ 00:05:59.936 END TEST rpc_daemon_integrity 00:05:59.936 ************************************ 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:59.936 00:05:59.936 real 0m0.316s 00:05:59.936 user 0m0.192s 00:05:59.936 sys 0m0.056s 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.936 07:20:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 07:20:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:59.936 07:20:32 rpc -- rpc/rpc.sh@84 -- # killprocess 60713 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@948 -- # '[' -z 60713 ']' 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@952 -- # kill -0 60713 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@953 -- # uname 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60713 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.936 killing process with pid 60713 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60713' 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@967 -- # kill 60713 00:05:59.936 07:20:32 rpc -- common/autotest_common.sh@972 -- # wait 60713 00:06:00.504 00:06:00.504 real 0m2.966s 00:06:00.504 user 0m3.843s 00:06:00.504 sys 0m0.802s 00:06:00.504 07:20:32 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.504 07:20:32 rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.504 ************************************ 00:06:00.504 END TEST rpc 00:06:00.504 ************************************ 00:06:00.504 07:20:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:00.504 07:20:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.504 07:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.504 07:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:00.504 ************************************ 00:06:00.504 START TEST skip_rpc 00:06:00.504 ************************************ 00:06:00.504 07:20:32 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:00.504 * Looking for test storage... 00:06:00.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.504 07:20:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.504 07:20:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.504 07:20:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:00.504 07:20:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.504 07:20:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.504 07:20:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.504 ************************************ 00:06:00.504 START TEST skip_rpc 00:06:00.504 ************************************ 00:06:00.504 07:20:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:00.504 07:20:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60974 00:06:00.504 07:20:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:00.504 07:20:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.504 07:20:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:00.504 [2024-07-25 07:20:33.186086] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
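Note: the rpc_integrity and rpc_daemon_integrity cases that finished above drive spdk_tgt through a create/claim/delete cycle via the harness's rpc_cmd wrapper. A rough by-hand equivalent of that cycle, sketched with scripts/rpc.py (assuming rpc_cmd forwards to that script, which is how the rest of this log reads):

# Sketch of the integrity cycle exercised above, against a running spdk_tgt.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru bdev on top and claim the base
$RPC bdev_get_bdevs | jq length                    # expect 2: Malloc0 + Passthru0
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 0 again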
00:06:00.504 [2024-07-25 07:20:33.186166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60974 ] 00:06:00.762 [2024-07-25 07:20:33.322978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.762 [2024-07-25 07:20:33.419277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.031 2024/07/25 07:20:38 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60974 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60974 ']' 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60974 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60974 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60974' 00:06:06.031 killing process with pid 60974 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60974 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60974 00:06:06.031 00:06:06.031 real 0m5.380s 00:06:06.031 user 0m5.058s 00:06:06.031 sys 0m0.243s 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.031 07:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.031 ************************************ 00:06:06.031 END TEST skip_rpc 00:06:06.031 ************************************ 00:06:06.031 07:20:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:06.031 07:20:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.031 07:20:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.031 07:20:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.031 ************************************ 00:06:06.032 START TEST skip_rpc_with_json 00:06:06.032 ************************************ 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61065 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61065 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61065 ']' 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.032 07:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.032 [2024-07-25 07:20:38.624301] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
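Note: the skip_rpc_with_json case starting here first creates a TCP transport over RPC, saves the running configuration to test/rpc/config.json, and then proves the target can be restarted from that file with the RPC server disabled (hence the later grep for 'TCP Transport Init'). A by-hand sketch of the same round trip (the rpc.py path and the /tmp destination are assumptions; the harness uses its rpc_cmd wrapper and its own config path):

# Save/restore round trip sketched with rpc.py against the first target.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp        # make the configuration non-trivial
$RPC save_config > /tmp/config.json      # dump the live JSON configuration
# Second run: no RPC server, configuration comes entirely from the file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json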
00:06:06.032 [2024-07-25 07:20:38.624419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61065 ] 00:06:06.290 [2024-07-25 07:20:38.767509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.290 [2024-07-25 07:20:38.867786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.855 [2024-07-25 07:20:39.506530] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:06.855 2024/07/25 07:20:39 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:06:06.855 request: 00:06:06.855 { 00:06:06.855 "method": "nvmf_get_transports", 00:06:06.855 "params": { 00:06:06.855 "trtype": "tcp" 00:06:06.855 } 00:06:06.855 } 00:06:06.855 Got JSON-RPC error response 00:06:06.855 GoRPCClient: error on JSON-RPC call 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.855 [2024-07-25 07:20:39.518583] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.855 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.114 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.114 07:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.114 { 00:06:07.114 "subsystems": [ 00:06:07.114 { 00:06:07.114 "subsystem": "keyring", 00:06:07.114 "config": [] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "iobuf", 00:06:07.114 "config": [ 00:06:07.114 { 00:06:07.114 "method": "iobuf_set_options", 00:06:07.114 "params": { 00:06:07.114 "large_bufsize": 135168, 00:06:07.114 "large_pool_count": 1024, 00:06:07.114 "small_bufsize": 8192, 00:06:07.114 "small_pool_count": 8192 00:06:07.114 } 00:06:07.114 } 00:06:07.114 ] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "sock", 00:06:07.114 "config": [ 00:06:07.114 { 00:06:07.114 "method": "sock_set_default_impl", 00:06:07.114 "params": { 00:06:07.114 "impl_name": "posix" 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": 
"sock_impl_set_options", 00:06:07.114 "params": { 00:06:07.114 "enable_ktls": false, 00:06:07.114 "enable_placement_id": 0, 00:06:07.114 "enable_quickack": false, 00:06:07.114 "enable_recv_pipe": true, 00:06:07.114 "enable_zerocopy_send_client": false, 00:06:07.114 "enable_zerocopy_send_server": true, 00:06:07.114 "impl_name": "ssl", 00:06:07.114 "recv_buf_size": 4096, 00:06:07.114 "send_buf_size": 4096, 00:06:07.114 "tls_version": 0, 00:06:07.114 "zerocopy_threshold": 0 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": "sock_impl_set_options", 00:06:07.114 "params": { 00:06:07.114 "enable_ktls": false, 00:06:07.114 "enable_placement_id": 0, 00:06:07.114 "enable_quickack": false, 00:06:07.114 "enable_recv_pipe": true, 00:06:07.114 "enable_zerocopy_send_client": false, 00:06:07.114 "enable_zerocopy_send_server": true, 00:06:07.114 "impl_name": "posix", 00:06:07.114 "recv_buf_size": 2097152, 00:06:07.114 "send_buf_size": 2097152, 00:06:07.114 "tls_version": 0, 00:06:07.114 "zerocopy_threshold": 0 00:06:07.114 } 00:06:07.114 } 00:06:07.114 ] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "vmd", 00:06:07.114 "config": [] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "accel", 00:06:07.114 "config": [ 00:06:07.114 { 00:06:07.114 "method": "accel_set_options", 00:06:07.114 "params": { 00:06:07.114 "buf_count": 2048, 00:06:07.114 "large_cache_size": 16, 00:06:07.114 "sequence_count": 2048, 00:06:07.114 "small_cache_size": 128, 00:06:07.114 "task_count": 2048 00:06:07.114 } 00:06:07.114 } 00:06:07.114 ] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "bdev", 00:06:07.114 "config": [ 00:06:07.114 { 00:06:07.114 "method": "bdev_set_options", 00:06:07.114 "params": { 00:06:07.114 "bdev_auto_examine": true, 00:06:07.114 "bdev_io_cache_size": 256, 00:06:07.114 "bdev_io_pool_size": 65535, 00:06:07.114 "iobuf_large_cache_size": 16, 00:06:07.114 "iobuf_small_cache_size": 128 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": "bdev_raid_set_options", 00:06:07.114 "params": { 00:06:07.114 "process_max_bandwidth_mb_sec": 0, 00:06:07.114 "process_window_size_kb": 1024 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": "bdev_iscsi_set_options", 00:06:07.114 "params": { 00:06:07.114 "timeout_sec": 30 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": "bdev_nvme_set_options", 00:06:07.114 "params": { 00:06:07.114 "action_on_timeout": "none", 00:06:07.114 "allow_accel_sequence": false, 00:06:07.114 "arbitration_burst": 0, 00:06:07.114 "bdev_retry_count": 3, 00:06:07.114 "ctrlr_loss_timeout_sec": 0, 00:06:07.114 "delay_cmd_submit": true, 00:06:07.114 "dhchap_dhgroups": [ 00:06:07.114 "null", 00:06:07.114 "ffdhe2048", 00:06:07.114 "ffdhe3072", 00:06:07.114 "ffdhe4096", 00:06:07.114 "ffdhe6144", 00:06:07.114 "ffdhe8192" 00:06:07.114 ], 00:06:07.114 "dhchap_digests": [ 00:06:07.114 "sha256", 00:06:07.114 "sha384", 00:06:07.114 "sha512" 00:06:07.114 ], 00:06:07.114 "disable_auto_failback": false, 00:06:07.114 "fast_io_fail_timeout_sec": 0, 00:06:07.114 "generate_uuids": false, 00:06:07.114 "high_priority_weight": 0, 00:06:07.114 "io_path_stat": false, 00:06:07.114 "io_queue_requests": 0, 00:06:07.114 "keep_alive_timeout_ms": 10000, 00:06:07.114 "low_priority_weight": 0, 00:06:07.114 "medium_priority_weight": 0, 00:06:07.114 "nvme_adminq_poll_period_us": 10000, 00:06:07.114 "nvme_error_stat": false, 00:06:07.114 "nvme_ioq_poll_period_us": 0, 00:06:07.114 "rdma_cm_event_timeout_ms": 0, 00:06:07.114 "rdma_max_cq_size": 
0, 00:06:07.114 "rdma_srq_size": 0, 00:06:07.114 "reconnect_delay_sec": 0, 00:06:07.114 "timeout_admin_us": 0, 00:06:07.114 "timeout_us": 0, 00:06:07.114 "transport_ack_timeout": 0, 00:06:07.114 "transport_retry_count": 4, 00:06:07.114 "transport_tos": 0 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": "bdev_nvme_set_hotplug", 00:06:07.114 "params": { 00:06:07.114 "enable": false, 00:06:07.114 "period_us": 100000 00:06:07.114 } 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "method": "bdev_wait_for_examine" 00:06:07.114 } 00:06:07.114 ] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "scsi", 00:06:07.114 "config": null 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "scheduler", 00:06:07.114 "config": [ 00:06:07.114 { 00:06:07.114 "method": "framework_set_scheduler", 00:06:07.114 "params": { 00:06:07.114 "name": "static" 00:06:07.114 } 00:06:07.114 } 00:06:07.114 ] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "vhost_scsi", 00:06:07.114 "config": [] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "vhost_blk", 00:06:07.114 "config": [] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "ublk", 00:06:07.114 "config": [] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "nbd", 00:06:07.114 "config": [] 00:06:07.114 }, 00:06:07.114 { 00:06:07.114 "subsystem": "nvmf", 00:06:07.114 "config": [ 00:06:07.115 { 00:06:07.115 "method": "nvmf_set_config", 00:06:07.115 "params": { 00:06:07.115 "admin_cmd_passthru": { 00:06:07.115 "identify_ctrlr": false 00:06:07.115 }, 00:06:07.115 "discovery_filter": "match_any" 00:06:07.115 } 00:06:07.115 }, 00:06:07.115 { 00:06:07.115 "method": "nvmf_set_max_subsystems", 00:06:07.115 "params": { 00:06:07.115 "max_subsystems": 1024 00:06:07.115 } 00:06:07.115 }, 00:06:07.115 { 00:06:07.115 "method": "nvmf_set_crdt", 00:06:07.115 "params": { 00:06:07.115 "crdt1": 0, 00:06:07.115 "crdt2": 0, 00:06:07.115 "crdt3": 0 00:06:07.115 } 00:06:07.115 }, 00:06:07.115 { 00:06:07.115 "method": "nvmf_create_transport", 00:06:07.115 "params": { 00:06:07.115 "abort_timeout_sec": 1, 00:06:07.115 "ack_timeout": 0, 00:06:07.115 "buf_cache_size": 4294967295, 00:06:07.115 "c2h_success": true, 00:06:07.115 "data_wr_pool_size": 0, 00:06:07.115 "dif_insert_or_strip": false, 00:06:07.115 "in_capsule_data_size": 4096, 00:06:07.115 "io_unit_size": 131072, 00:06:07.115 "max_aq_depth": 128, 00:06:07.115 "max_io_qpairs_per_ctrlr": 127, 00:06:07.115 "max_io_size": 131072, 00:06:07.115 "max_queue_depth": 128, 00:06:07.115 "num_shared_buffers": 511, 00:06:07.115 "sock_priority": 0, 00:06:07.115 "trtype": "TCP", 00:06:07.115 "zcopy": false 00:06:07.115 } 00:06:07.115 } 00:06:07.115 ] 00:06:07.115 }, 00:06:07.115 { 00:06:07.115 "subsystem": "iscsi", 00:06:07.115 "config": [ 00:06:07.115 { 00:06:07.115 "method": "iscsi_set_options", 00:06:07.115 "params": { 00:06:07.115 "allow_duplicated_isid": false, 00:06:07.115 "chap_group": 0, 00:06:07.115 "data_out_pool_size": 2048, 00:06:07.115 "default_time2retain": 20, 00:06:07.115 "default_time2wait": 2, 00:06:07.115 "disable_chap": false, 00:06:07.115 "error_recovery_level": 0, 00:06:07.115 "first_burst_length": 8192, 00:06:07.115 "immediate_data": true, 00:06:07.115 "immediate_data_pool_size": 16384, 00:06:07.115 "max_connections_per_session": 2, 00:06:07.115 "max_large_datain_per_connection": 64, 00:06:07.115 "max_queue_depth": 64, 00:06:07.115 "max_r2t_per_connection": 4, 00:06:07.115 "max_sessions": 128, 00:06:07.115 "mutual_chap": false, 00:06:07.115 "node_base": "iqn.2016-06.io.spdk", 
00:06:07.115 "nop_in_interval": 30, 00:06:07.115 "nop_timeout": 60, 00:06:07.115 "pdu_pool_size": 36864, 00:06:07.115 "require_chap": false 00:06:07.115 } 00:06:07.115 } 00:06:07.115 ] 00:06:07.115 } 00:06:07.115 ] 00:06:07.115 } 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61065 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61065 ']' 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61065 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61065 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.115 killing process with pid 61065 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61065' 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61065 00:06:07.115 07:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61065 00:06:07.373 07:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61100 00:06:07.373 07:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.373 07:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61100 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61100 ']' 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61100 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61100 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.643 killing process with pid 61100 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61100' 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61100 00:06:12.643 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61100 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.902 00:06:12.902 real 0m6.863s 00:06:12.902 user 0m6.605s 00:06:12.902 sys 0m0.567s 00:06:12.902 07:20:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.902 ************************************ 00:06:12.902 END TEST skip_rpc_with_json 00:06:12.902 ************************************ 00:06:12.902 07:20:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:12.902 07:20:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.902 07:20:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.902 07:20:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.902 ************************************ 00:06:12.902 START TEST skip_rpc_with_delay 00:06:12.902 ************************************ 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.902 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.903 [2024-07-25 07:20:45.539415] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:12.903 [2024-07-25 07:20:45.539520] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.903 00:06:12.903 real 0m0.075s 00:06:12.903 user 0m0.054s 00:06:12.903 sys 0m0.020s 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.903 07:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:12.903 ************************************ 00:06:12.903 END TEST skip_rpc_with_delay 00:06:12.903 ************************************ 00:06:12.903 07:20:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:12.903 07:20:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:12.903 07:20:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:12.903 07:20:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.903 07:20:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.903 07:20:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.903 ************************************ 00:06:12.903 START TEST exit_on_failed_rpc_init 00:06:12.903 ************************************ 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61210 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61210 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61210 ']' 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.903 07:20:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.162 [2024-07-25 07:20:45.677773] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:13.162 [2024-07-25 07:20:45.677850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61210 ] 00:06:13.162 [2024-07-25 07:20:45.816503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.421 [2024-07-25 07:20:45.925686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:13.990 07:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.990 [2024-07-25 07:20:46.662397] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:13.990 [2024-07-25 07:20:46.662480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61240 ] 00:06:14.249 [2024-07-25 07:20:46.794480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.249 [2024-07-25 07:20:46.916600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.249 [2024-07-25 07:20:46.916677] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:14.249 [2024-07-25 07:20:46.916686] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:14.249 [2024-07-25 07:20:46.916692] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61210 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61210 ']' 00:06:14.509 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61210 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61210 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.510 killing process with pid 61210 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61210' 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61210 00:06:14.510 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61210 00:06:14.769 00:06:14.769 real 0m1.753s 00:06:14.769 user 0m2.082s 00:06:14.769 sys 0m0.372s 00:06:14.769 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.769 07:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.769 ************************************ 00:06:14.769 END TEST exit_on_failed_rpc_init 00:06:14.769 ************************************ 00:06:14.769 07:20:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.769 00:06:14.769 real 0m14.431s 00:06:14.769 user 0m13.914s 00:06:14.769 sys 0m1.450s 00:06:14.769 07:20:47 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.769 07:20:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.769 ************************************ 00:06:14.769 END TEST skip_rpc 00:06:14.769 ************************************ 00:06:14.769 07:20:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:14.769 07:20:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.769 07:20:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.769 07:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:14.769 
************************************ 00:06:14.769 START TEST rpc_client 00:06:14.769 ************************************ 00:06:14.769 07:20:47 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:15.028 * Looking for test storage... 00:06:15.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:15.028 07:20:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:15.028 OK 00:06:15.028 07:20:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:15.028 00:06:15.028 real 0m0.139s 00:06:15.028 user 0m0.057s 00:06:15.028 sys 0m0.091s 00:06:15.028 07:20:47 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.028 07:20:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 ************************************ 00:06:15.028 END TEST rpc_client 00:06:15.028 ************************************ 00:06:15.028 07:20:47 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:15.028 07:20:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.028 07:20:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.028 07:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 ************************************ 00:06:15.028 START TEST json_config 00:06:15.028 ************************************ 00:06:15.028 07:20:47 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:15.028 07:20:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.028 07:20:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.287 07:20:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.287 07:20:47 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.287 07:20:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.287 07:20:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.287 07:20:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.287 07:20:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.287 07:20:47 json_config -- paths/export.sh@5 -- # export PATH 00:06:15.287 07:20:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@47 -- # : 0 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.287 07:20:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:15.287 07:20:47 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.287 INFO: JSON configuration test init 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.287 07:20:47 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:15.287 07:20:47 json_config -- json_config/common.sh@9 -- # local app=target 00:06:15.287 07:20:47 json_config -- json_config/common.sh@10 -- # shift 00:06:15.287 07:20:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.287 07:20:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.287 07:20:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.287 07:20:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.287 07:20:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.287 07:20:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61358 00:06:15.287 Waiting for target to run... 00:06:15.287 07:20:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.287 07:20:47 json_config -- json_config/common.sh@25 -- # waitforlisten 61358 /var/tmp/spdk_tgt.sock 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 61358 ']' 00:06:15.287 07:20:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.287 07:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.287 [2024-07-25 07:20:47.863879] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:15.287 [2024-07-25 07:20:47.863958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61358 ] 00:06:15.546 [2024-07-25 07:20:48.217105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.806 [2024-07-25 07:20:48.302657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.065 07:20:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.065 07:20:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:16.065 00:06:16.065 07:20:48 json_config -- json_config/common.sh@26 -- # echo '' 00:06:16.065 07:20:48 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:16.065 07:20:48 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:16.065 07:20:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.065 07:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 07:20:48 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:16.065 07:20:48 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:16.065 07:20:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.065 07:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.324 07:20:48 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:16.324 07:20:48 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:16.324 07:20:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:16.583 07:20:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.583 07:20:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:16.583 07:20:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:16.583 07:20:49 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@48 -- # local 
get_types 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@51 -- # sort 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:16.843 07:20:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.843 07:20:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:16.843 07:20:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.843 07:20:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:16.843 07:20:49 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.843 07:20:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.102 MallocForNvmf0 00:06:17.102 07:20:49 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.102 07:20:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.361 MallocForNvmf1 00:06:17.361 07:20:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.361 07:20:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.620 [2024-07-25 07:20:50.209700] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.620 07:20:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.620 07:20:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.879 07:20:50 json_config -- json_config/json_config.sh@251 -- # 
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.879 07:20:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.137 07:20:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.137 07:20:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.395 07:20:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.395 07:20:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.395 [2024-07-25 07:20:51.088442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.395 07:20:51 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:18.395 07:20:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.395 07:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.653 07:20:51 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:18.653 07:20:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.653 07:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.653 07:20:51 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:18.653 07:20:51 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.653 07:20:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.912 MallocBdevForConfigChangeCheck 00:06:18.912 07:20:51 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:18.912 07:20:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.912 07:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.912 07:20:51 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:18.912 07:20:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.172 INFO: shutting down applications... 00:06:19.172 07:20:51 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:06:19.172 07:20:51 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:19.172 07:20:51 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:19.172 07:20:51 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:19.172 07:20:51 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:19.431 Calling clear_iscsi_subsystem 00:06:19.431 Calling clear_nvmf_subsystem 00:06:19.431 Calling clear_nbd_subsystem 00:06:19.431 Calling clear_ublk_subsystem 00:06:19.431 Calling clear_vhost_blk_subsystem 00:06:19.431 Calling clear_vhost_scsi_subsystem 00:06:19.431 Calling clear_bdev_subsystem 00:06:19.690 07:20:52 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:19.690 07:20:52 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:19.690 07:20:52 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:19.690 07:20:52 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:19.690 07:20:52 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.690 07:20:52 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:19.950 07:20:52 json_config -- json_config/json_config.sh@349 -- # break 00:06:19.950 07:20:52 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:19.950 07:20:52 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:19.950 07:20:52 json_config -- json_config/common.sh@31 -- # local app=target 00:06:19.950 07:20:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.950 07:20:52 json_config -- json_config/common.sh@35 -- # [[ -n 61358 ]] 00:06:19.950 07:20:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61358 00:06:19.950 07:20:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.950 07:20:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.950 07:20:52 json_config -- json_config/common.sh@41 -- # kill -0 61358 00:06:19.950 07:20:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.519 07:20:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.519 07:20:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.519 07:20:53 json_config -- json_config/common.sh@41 -- # kill -0 61358 00:06:20.519 07:20:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.519 07:20:53 json_config -- json_config/common.sh@43 -- # break 00:06:20.519 07:20:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.519 SPDK target shutdown done 00:06:20.519 07:20:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.519 INFO: relaunching applications... 00:06:20.519 07:20:53 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:06:20.519 07:20:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.519 07:20:53 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.519 07:20:53 json_config -- json_config/common.sh@10 -- # shift 00:06:20.519 07:20:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.519 07:20:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.519 07:20:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.519 07:20:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.519 07:20:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.519 Waiting for target to run... 00:06:20.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.519 07:20:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61632 00:06:20.519 07:20:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.519 07:20:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.519 07:20:53 json_config -- json_config/common.sh@25 -- # waitforlisten 61632 /var/tmp/spdk_tgt.sock 00:06:20.519 07:20:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 61632 ']' 00:06:20.519 07:20:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.519 07:20:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.519 07:20:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.519 07:20:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.519 07:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.519 [2024-07-25 07:20:53.130755] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:20.519 [2024-07-25 07:20:53.130819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61632 ] 00:06:20.785 [2024-07-25 07:20:53.481326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.046 [2024-07-25 07:20:53.567429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.305 [2024-07-25 07:20:53.884779] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.305 [2024-07-25 07:20:53.916772] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:21.305 00:06:21.305 INFO: Checking if target configuration is the same... 00:06:21.305 07:20:54 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.305 07:20:54 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:21.305 07:20:54 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.305 07:20:54 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:21.305 07:20:54 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
00:06:21.305 07:20:54 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.305 07:20:54 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:21.305 07:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.305 + '[' 2 -ne 2 ']' 00:06:21.305 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:21.305 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:21.305 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:21.305 +++ basename /dev/fd/62 00:06:21.565 ++ mktemp /tmp/62.XXX 00:06:21.565 + tmp_file_1=/tmp/62.8qe 00:06:21.565 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.565 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:21.565 + tmp_file_2=/tmp/spdk_tgt_config.json.H7c 00:06:21.565 + ret=0 00:06:21.565 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:21.824 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:21.824 + diff -u /tmp/62.8qe /tmp/spdk_tgt_config.json.H7c 00:06:21.824 INFO: JSON config files are the same 00:06:21.824 + echo 'INFO: JSON config files are the same' 00:06:21.824 + rm /tmp/62.8qe /tmp/spdk_tgt_config.json.H7c 00:06:21.824 + exit 0 00:06:21.824 INFO: changing configuration and checking if this can be detected... 00:06:21.824 07:20:54 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:21.824 07:20:54 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:21.824 07:20:54 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:21.824 07:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.083 07:20:54 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.083 07:20:54 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:22.083 07:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.083 + '[' 2 -ne 2 ']' 00:06:22.083 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:22.083 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:22.083 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:22.083 +++ basename /dev/fd/62 00:06:22.083 ++ mktemp /tmp/62.XXX 00:06:22.083 + tmp_file_1=/tmp/62.inV 00:06:22.083 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.083 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.083 + tmp_file_2=/tmp/spdk_tgt_config.json.eRx 00:06:22.083 + ret=0 00:06:22.083 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:22.342 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:22.601 + diff -u /tmp/62.inV /tmp/spdk_tgt_config.json.eRx 00:06:22.601 + ret=1 00:06:22.602 + echo '=== Start of file: /tmp/62.inV ===' 00:06:22.602 + cat /tmp/62.inV 00:06:22.602 + echo '=== End of file: /tmp/62.inV ===' 00:06:22.602 + echo '' 00:06:22.602 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eRx ===' 00:06:22.602 + cat /tmp/spdk_tgt_config.json.eRx 00:06:22.602 + echo '=== End of file: /tmp/spdk_tgt_config.json.eRx ===' 00:06:22.602 + echo '' 00:06:22.602 + rm /tmp/62.inV /tmp/spdk_tgt_config.json.eRx 00:06:22.602 + exit 1 00:06:22.602 INFO: configuration change detected. 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@321 -- # [[ -n 61632 ]] 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.602 07:20:55 json_config -- json_config/json_config.sh@327 -- # killprocess 61632 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@948 -- # '[' -z 61632 ']' 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@952 -- # kill -0 61632 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@953 -- # uname 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61632 00:06:22.602 
killing process with pid 61632 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61632' 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@967 -- # kill 61632 00:06:22.602 07:20:55 json_config -- common/autotest_common.sh@972 -- # wait 61632 00:06:22.860 07:20:55 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.860 07:20:55 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:22.860 07:20:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.860 07:20:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.860 INFO: Success 00:06:22.860 07:20:55 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:22.860 07:20:55 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:22.860 ************************************ 00:06:22.860 END TEST json_config 00:06:22.860 ************************************ 00:06:22.860 00:06:22.860 real 0m7.849s 00:06:22.860 user 0m11.017s 00:06:22.860 sys 0m1.832s 00:06:22.860 07:20:55 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.860 07:20:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.860 07:20:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.860 07:20:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.860 07:20:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.860 07:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:22.860 ************************************ 00:06:22.860 START TEST json_config_extra_key 00:06:22.860 ************************************ 00:06:22.860 07:20:55 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:06:23.120 07:20:55 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.120 07:20:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.120 07:20:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.120 07:20:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.120 07:20:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 07:20:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 07:20:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 07:20:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:23.120 07:20:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.120 07:20:55 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.120 07:20:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:23.120 INFO: launching applications... 00:06:23.120 07:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61808 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:23.120 07:20:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.120 Waiting for target to run... 
00:06:23.121 07:20:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61808 /var/tmp/spdk_tgt.sock 00:06:23.121 07:20:55 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61808 ']' 00:06:23.121 07:20:55 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.121 07:20:55 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.121 07:20:55 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.121 07:20:55 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.121 07:20:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.121 [2024-07-25 07:20:55.765841] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:23.121 [2024-07-25 07:20:55.766032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:06:23.689 [2024-07-25 07:20:56.142031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.689 [2024-07-25 07:20:56.261736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.947 07:20:56 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.947 07:20:56 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:23.947 00:06:23.947 INFO: shutting down applications... 00:06:23.947 07:20:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:23.947 07:20:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:23.947 07:20:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:23.947 07:20:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:23.947 07:20:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:23.947 07:20:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61808 ]] 00:06:23.947 07:20:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61808 00:06:23.948 07:20:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:23.948 07:20:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.948 07:20:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61808 00:06:23.948 07:20:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61808 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:24.516 07:20:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:24.516 SPDK target shutdown done 00:06:24.516 07:20:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:24.516 Success 00:06:24.516 00:06:24.516 real 0m1.576s 00:06:24.516 user 0m1.356s 00:06:24.516 sys 0m0.392s 00:06:24.516 07:20:57 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.516 07:20:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 ************************************ 00:06:24.516 END TEST json_config_extra_key 00:06:24.516 ************************************ 00:06:24.516 07:20:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.516 07:20:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.516 07:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.516 07:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 ************************************ 00:06:24.516 START TEST alias_rpc 00:06:24.516 ************************************ 00:06:24.516 07:20:57 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.775 * Looking for test storage... 00:06:24.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:24.775 07:20:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.775 07:20:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.775 07:20:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61879 00:06:24.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
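The json_config_extra_key run above reduces to: launch spdk_tgt with a static JSON config, wait for its RPC socket, then SIGINT the process and poll until it exits. A minimal bash sketch of that flow, with the paths and flags taken from this trace (the wait and shutdown loops are a simplification of the waitforlisten and json_config_test_shutdown_app helpers, not a verbatim copy):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  tgt_pid=$!
  # wait for the target's RPC socket to appear (waitforlisten additionally probes it over RPC)
  while [ ! -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done
  # graceful shutdown: SIGINT, then poll the pid for up to ~15 seconds
  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$tgt_pid" 2>/dev/null || break
      sleep 0.5
  done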
00:06:24.775 07:20:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61879 00:06:24.775 07:20:57 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61879 ']' 00:06:24.775 07:20:57 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.775 07:20:57 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.775 07:20:57 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.775 07:20:57 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.775 07:20:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.775 [2024-07-25 07:20:57.397268] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:24.775 [2024-07-25 07:20:57.397384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61879 ] 00:06:25.033 [2024-07-25 07:20:57.543591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.033 [2024-07-25 07:20:57.647672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.600 07:20:58 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.600 07:20:58 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.600 07:20:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:25.858 07:20:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61879 00:06:25.858 07:20:58 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61879 ']' 00:06:25.858 07:20:58 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61879 00:06:25.858 07:20:58 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:25.858 07:20:58 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.858 07:20:58 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61879 00:06:26.118 killing process with pid 61879 00:06:26.118 07:20:58 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.118 07:20:58 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.118 07:20:58 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61879' 00:06:26.118 07:20:58 alias_rpc -- common/autotest_common.sh@967 -- # kill 61879 00:06:26.118 07:20:58 alias_rpc -- common/autotest_common.sh@972 -- # wait 61879 00:06:26.377 ************************************ 00:06:26.377 END TEST alias_rpc 00:06:26.377 ************************************ 00:06:26.377 00:06:26.377 real 0m1.703s 00:06:26.377 user 0m1.889s 00:06:26.377 sys 0m0.425s 00:06:26.377 07:20:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.377 07:20:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.377 07:20:58 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:06:26.377 07:20:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.377 07:20:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.377 07:20:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.377 07:20:58 -- common/autotest_common.sh@10 -- # set +x 00:06:26.377 ************************************ 00:06:26.377 START TEST 
dpdk_mem_utility 00:06:26.377 ************************************ 00:06:26.377 07:20:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.377 * Looking for test storage... 00:06:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:26.377 07:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:26.377 07:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61971 00:06:26.377 07:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61971 00:06:26.377 07:20:59 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61971 ']' 00:06:26.377 07:20:59 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.377 07:20:59 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.377 07:20:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.377 07:20:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.377 07:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.377 07:20:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.636 [2024-07-25 07:20:59.135581] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:26.636 [2024-07-25 07:20:59.135653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61971 ] 00:06:26.636 [2024-07-25 07:20:59.274931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.896 [2024-07-25 07:20:59.380139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.466 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.466 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:27.466 07:21:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:27.466 07:21:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:27.466 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.466 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.466 { 00:06:27.466 "filename": "/tmp/spdk_mem_dump.txt" 00:06:27.466 } 00:06:27.466 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.466 07:21:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:27.466 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:27.466 1 heaps totaling size 814.000000 MiB 00:06:27.466 size: 814.000000 MiB heap id: 0 00:06:27.466 end heaps---------- 00:06:27.466 8 mempools totaling size 598.116089 MiB 00:06:27.466 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:27.466 size: 158.602051 MiB name: PDU_data_out_Pool 
00:06:27.466 size: 84.521057 MiB name: bdev_io_61971 00:06:27.466 size: 51.011292 MiB name: evtpool_61971 00:06:27.466 size: 50.003479 MiB name: msgpool_61971 00:06:27.466 size: 21.763794 MiB name: PDU_Pool 00:06:27.466 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:27.466 size: 0.026123 MiB name: Session_Pool 00:06:27.466 end mempools------- 00:06:27.466 6 memzones totaling size 4.142822 MiB 00:06:27.466 size: 1.000366 MiB name: RG_ring_0_61971 00:06:27.466 size: 1.000366 MiB name: RG_ring_1_61971 00:06:27.466 size: 1.000366 MiB name: RG_ring_4_61971 00:06:27.466 size: 1.000366 MiB name: RG_ring_5_61971 00:06:27.466 size: 0.125366 MiB name: RG_ring_2_61971 00:06:27.466 size: 0.015991 MiB name: RG_ring_3_61971 00:06:27.466 end memzones------- 00:06:27.466 07:21:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:27.466 heap id: 0 total size: 814.000000 MiB number of busy elements: 223 number of free elements: 15 00:06:27.466 list of free elements. size: 12.486023 MiB 00:06:27.466 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:27.466 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:27.466 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:27.466 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:27.466 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:27.466 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:27.466 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:27.466 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:27.467 element at address: 0x200000200000 with size: 0.837036 MiB 00:06:27.467 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:06:27.467 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:27.467 element at address: 0x200000800000 with size: 0.487061 MiB 00:06:27.467 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:27.467 element at address: 0x200027e00000 with size: 0.398315 MiB 00:06:27.467 element at address: 0x200003a00000 with size: 0.350769 MiB 00:06:27.467 list of standard malloc elements. 
size: 199.251404 MiB 00:06:27.467 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:27.467 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:27.467 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:27.467 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:27.467 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:27.467 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:27.467 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:27.467 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:27.467 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:27.467 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a59cc0 with size: 0.000183 MiB 
00:06:27.467 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:27.467 element at 
address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:27.467 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa94fc0 
with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:27.468 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e66040 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6eb80 with size: 0.000183 MiB 
00:06:27.468 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:27.468 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:27.468 list of memzone associated elements. 
size: 602.262573 MiB 00:06:27.468 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:27.468 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:27.468 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:27.468 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:27.468 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:27.468 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61971_0 00:06:27.468 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:27.468 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61971_0 00:06:27.468 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:27.468 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61971_0 00:06:27.468 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:27.468 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:27.468 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:27.468 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:27.468 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:27.468 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61971 00:06:27.468 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:27.468 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61971 00:06:27.468 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:27.468 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61971 00:06:27.468 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:27.468 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:27.468 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:27.468 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:27.468 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:27.468 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:27.468 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:27.468 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:27.468 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:27.468 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61971 00:06:27.468 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:27.468 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61971 00:06:27.468 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:27.468 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61971 00:06:27.468 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:27.468 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61971 00:06:27.468 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:27.468 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61971 00:06:27.468 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:27.468 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:27.468 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:27.468 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:27.468 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:27.468 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:27.468 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:27.468 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61971 00:06:27.468 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:27.468 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:27.468 element at address: 0x200027e66100 with size: 0.023743 MiB 00:06:27.468 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:27.468 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:27.468 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61971 00:06:27.468 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:06:27.468 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:27.468 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:27.468 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61971 00:06:27.468 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:27.468 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61971 00:06:27.468 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:06:27.469 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:27.469 07:21:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:27.469 07:21:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61971 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61971 ']' 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61971 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61971 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61971' 00:06:27.469 killing process with pid 61971 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61971 00:06:27.469 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61971 00:06:28.054 00:06:28.054 real 0m1.516s 00:06:28.054 user 0m1.586s 00:06:28.054 sys 0m0.382s 00:06:28.054 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.054 07:21:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.054 ************************************ 00:06:28.054 END TEST dpdk_mem_utility 00:06:28.054 ************************************ 00:06:28.054 07:21:00 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:28.054 07:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.054 07:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.054 07:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:28.054 ************************************ 00:06:28.054 START TEST event 00:06:28.054 ************************************ 00:06:28.054 07:21:00 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:28.055 * Looking for test storage... 
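The dpdk_mem_utility trace above is driven by two tools: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK allocation dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump. A rough sketch of the same sequence against an already-running spdk_tgt (paths from this run; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py):

  # ask the target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # heap / mempool / memzone summary (the "1 heaps totaling size 814.000000 MiB" block above)
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # per-element breakdown of heap 0 (the long "element at address ..." listing above)
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0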
00:06:28.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:28.055 07:21:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.055 07:21:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.055 07:21:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.055 07:21:00 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.055 07:21:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.055 07:21:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.055 ************************************ 00:06:28.055 START TEST event_perf 00:06:28.055 ************************************ 00:06:28.055 07:21:00 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.055 Running I/O for 1 seconds...[2024-07-25 07:21:00.687929] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:28.055 [2024-07-25 07:21:00.688037] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62060 ] 00:06:28.314 [2024-07-25 07:21:00.831092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.314 [2024-07-25 07:21:00.947474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.314 [2024-07-25 07:21:00.947672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.314 [2024-07-25 07:21:00.947881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.314 Running I/O for 1 seconds...[2024-07-25 07:21:00.947884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.693 00:06:29.693 lcore 0: 192411 00:06:29.693 lcore 1: 192410 00:06:29.693 lcore 2: 192410 00:06:29.693 lcore 3: 192410 00:06:29.693 done. 00:06:29.693 00:06:29.693 real 0m1.369s 00:06:29.693 user 0m4.180s 00:06:29.693 sys 0m0.066s 00:06:29.693 07:21:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.693 07:21:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 ************************************ 00:06:29.693 END TEST event_perf 00:06:29.693 ************************************ 00:06:29.693 07:21:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.693 07:21:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.693 07:21:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.693 07:21:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.694 ************************************ 00:06:29.694 START TEST event_reactor 00:06:29.694 ************************************ 00:06:29.694 07:21:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.694 [2024-07-25 07:21:02.114964] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:29.694 [2024-07-25 07:21:02.115155] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62099 ] 00:06:29.694 [2024-07-25 07:21:02.257782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.694 [2024-07-25 07:21:02.359628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.076 test_start 00:06:31.076 oneshot 00:06:31.076 tick 100 00:06:31.076 tick 100 00:06:31.076 tick 250 00:06:31.076 tick 100 00:06:31.076 tick 100 00:06:31.076 tick 100 00:06:31.076 tick 250 00:06:31.076 tick 500 00:06:31.076 tick 100 00:06:31.076 tick 100 00:06:31.076 tick 250 00:06:31.076 tick 100 00:06:31.076 tick 100 00:06:31.076 test_end 00:06:31.076 00:06:31.076 real 0m1.349s 00:06:31.076 user 0m1.188s 00:06:31.076 sys 0m0.055s 00:06:31.076 07:21:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.076 ************************************ 00:06:31.076 END TEST event_reactor 00:06:31.076 ************************************ 00:06:31.076 07:21:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 07:21:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.076 07:21:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.076 07:21:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.076 07:21:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.076 ************************************ 00:06:31.076 START TEST event_reactor_perf 00:06:31.076 ************************************ 00:06:31.076 07:21:03 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.076 [2024-07-25 07:21:03.519593] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:06:31.076 [2024-07-25 07:21:03.519810] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:06:31.076 [2024-07-25 07:21:03.659978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.076 [2024-07-25 07:21:03.767356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.465 test_start 00:06:32.465 test_end 00:06:32.465 Performance: 396013 events per second 00:06:32.465 00:06:32.465 real 0m1.354s 00:06:32.465 user 0m1.191s 00:06:32.465 sys 0m0.055s 00:06:32.465 07:21:04 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.465 ************************************ 00:06:32.465 END TEST event_reactor_perf 00:06:32.465 ************************************ 00:06:32.465 07:21:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.465 07:21:04 event -- event/event.sh@49 -- # uname -s 00:06:32.465 07:21:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:32.465 07:21:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.465 07:21:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.465 07:21:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.465 07:21:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.465 ************************************ 00:06:32.465 START TEST event_scheduler 00:06:32.465 ************************************ 00:06:32.465 07:21:04 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.465 * Looking for test storage... 00:06:32.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:32.465 07:21:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:32.465 07:21:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62196 00:06:32.465 07:21:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:32.465 07:21:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.465 07:21:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62196 00:06:32.465 07:21:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62196 ']' 00:06:32.465 07:21:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.465 07:21:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.465 07:21:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.465 07:21:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.465 07:21:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.465 [2024-07-25 07:21:05.088323] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
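The three event micro-benchmarks that just completed can also be run by hand from the build tree; -m is the core mask and -t the run time in seconds (the values below are the ones used in this run):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1   # events dispatched per lcore
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1                # oneshot/tick ordering trace
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1      # raw events per second

In this run the four reactors each retired roughly 192,410 events in one second, and reactor_perf reported 396013 events per second on a single core.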
00:06:32.465 [2024-07-25 07:21:05.088833] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62196 ] 00:06:32.723 [2024-07-25 07:21:05.226678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.723 [2024-07-25 07:21:05.334856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.723 [2024-07-25 07:21:05.334907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.723 [2024-07-25 07:21:05.335045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.723 [2024-07-25 07:21:05.335047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.290 07:21:06 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.290 07:21:06 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:33.290 07:21:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:33.290 07:21:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.561 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.561 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.561 POWER: Cannot set governor of lcore 0 to performance 00:06:33.561 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.561 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.561 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.561 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.561 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:33.561 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:33.561 POWER: Unable to set Power Management Environment for lcore 0 00:06:33.561 [2024-07-25 07:21:06.039481] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:33.561 [2024-07-25 07:21:06.039542] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:33.561 [2024-07-25 07:21:06.039587] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:33.561 [2024-07-25 07:21:06.039630] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:33.561 [2024-07-25 07:21:06.039678] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:33.561 [2024-07-25 07:21:06.039703] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 [2024-07-25 07:21:06.119875] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 ************************************ 00:06:33.561 START TEST scheduler_create_thread 00:06:33.561 ************************************ 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 2 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 3 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 4 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 5 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 6 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 7 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 8 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 9 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 10 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.561 07:21:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.946 07:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.946 07:21:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.946 07:21:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.946 07:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.946 07:21:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.882 07:21:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.882 07:21:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:35.882 07:21:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.882 07:21:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.819 07:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.819 07:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.819 07:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.819 07:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.819 07:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.386 ************************************ 00:06:37.386 END TEST scheduler_create_thread 00:06:37.386 ************************************ 00:06:37.386 07:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.386 00:06:37.386 real 0m3.884s 00:06:37.386 user 0m0.027s 00:06:37.386 sys 0m0.007s 00:06:37.386 07:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.386 07:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.386 07:21:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.386 07:21:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62196 00:06:37.386 07:21:10 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62196 ']' 00:06:37.386 07:21:10 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62196 00:06:37.386 07:21:10 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62196 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62196' 00:06:37.387 killing process with pid 62196 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62196 00:06:37.387 07:21:10 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62196 00:06:37.954 [2024-07-25 07:21:10.398161] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
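For readers following the xtrace, the scheduler_create_thread run above reduces to a short RPC sequence issued through the test's scheduler_plugin. A minimal sketch of that sequence in shell (the plugin name, thread names, masks and activity values are taken from the trace; socket handling, error checks and the sleeps between steps are omitted):

rpc="scripts/rpc.py --plugin scheduler_plugin"

# Four pinned 100%-busy threads, one per core mask, then four pinned idle ones.
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads; one of them is later switched to 50% activity by id.
$rpc scheduler_thread_create -n one_third_active -a 30
tid=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$tid" 50

# Create a thread and delete it again to exercise teardown.
tid=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$tid"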
00:06:37.954 00:06:37.954 real 0m5.770s 00:06:37.954 user 0m12.817s 00:06:37.954 sys 0m0.370s 00:06:37.954 07:21:10 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.213 07:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.213 ************************************ 00:06:38.213 END TEST event_scheduler 00:06:38.213 ************************************ 00:06:38.213 07:21:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:38.213 07:21:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:38.213 07:21:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.213 07:21:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.213 07:21:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.213 ************************************ 00:06:38.213 START TEST app_repeat 00:06:38.213 ************************************ 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62319 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.213 Process app_repeat pid: 62319 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62319' 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.213 spdk_app_start Round 0 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:38.213 07:21:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62319 /var/tmp/spdk-nbd.sock 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62319 ']' 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.213 07:21:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.213 [2024-07-25 07:21:10.776687] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
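The app_repeat test starting here repeats the same start/verify/stop cycle three times against the NBD RPC socket. A rough skeleton of that outer loop, using the helper names visible in the trace (waitforlisten, killprocess); the per-round verify step is expanded in the rounds that follow, and the restart-after-SIGTERM behaviour is what produces the "Round 0..3" messages near the end of the trace:

sock=/var/tmp/spdk-nbd.sock
modprobe nbd
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in 0 1 2; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$sock"                    # wait for the RPC socket
    # ... create Malloc0/Malloc1 and verify them over /dev/nbd0 and /dev/nbd1 ...
    scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM   # app_repeat restarts the app
    sleep 3
done
waitforlisten "$repeat_pid" "$sock"                        # final round comes up once more
killprocess "$repeat_pid"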
00:06:38.213 [2024-07-25 07:21:10.776770] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:06:38.213 [2024-07-25 07:21:10.917356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.471 [2024-07-25 07:21:11.025409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.471 [2024-07-25 07:21:11.025410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.035 07:21:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.035 07:21:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:39.035 07:21:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.292 Malloc0 00:06:39.292 07:21:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.549 Malloc1 00:06:39.549 07:21:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.549 07:21:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.808 /dev/nbd0 00:06:39.808 07:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.808 07:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:39.808 07:21:12 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.808 1+0 records in 00:06:39.808 1+0 records out 00:06:39.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035007 s, 11.7 MB/s 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.808 07:21:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.808 07:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.808 07:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.808 07:21:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.067 /dev/nbd1 00:06:40.067 07:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.067 07:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.067 1+0 records in 00:06:40.067 1+0 records out 00:06:40.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417219 s, 9.8 MB/s 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.067 07:21:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.067 07:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.067 07:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.067 07:21:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.067 07:21:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.067 
07:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.325 07:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.325 { 00:06:40.325 "bdev_name": "Malloc0", 00:06:40.325 "nbd_device": "/dev/nbd0" 00:06:40.325 }, 00:06:40.325 { 00:06:40.325 "bdev_name": "Malloc1", 00:06:40.325 "nbd_device": "/dev/nbd1" 00:06:40.325 } 00:06:40.325 ]' 00:06:40.325 07:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.325 { 00:06:40.325 "bdev_name": "Malloc0", 00:06:40.325 "nbd_device": "/dev/nbd0" 00:06:40.325 }, 00:06:40.325 { 00:06:40.325 "bdev_name": "Malloc1", 00:06:40.325 "nbd_device": "/dev/nbd1" 00:06:40.325 } 00:06:40.325 ]' 00:06:40.325 07:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.325 /dev/nbd1' 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.325 /dev/nbd1' 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.325 256+0 records in 00:06:40.325 256+0 records out 00:06:40.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062411 s, 168 MB/s 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.325 07:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.583 256+0 records in 00:06:40.583 256+0 records out 00:06:40.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223995 s, 46.8 MB/s 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.583 256+0 records in 00:06:40.583 256+0 records out 00:06:40.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184476 s, 56.8 MB/s 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.583 07:21:13 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.583 07:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.840 07:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.096 07:21:13 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.096 07:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.353 07:21:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.353 07:21:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.610 07:21:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.868 [2024-07-25 07:21:14.364605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.868 [2024-07-25 07:21:14.459252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.868 [2024-07-25 07:21:14.459252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.868 [2024-07-25 07:21:14.501377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.868 [2024-07-25 07:21:14.501425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.148 07:21:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.148 spdk_app_start Round 1 00:06:45.148 07:21:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:45.148 07:21:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62319 /var/tmp/spdk-nbd.sock 00:06:45.148 07:21:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62319 ']' 00:06:45.148 07:21:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.148 07:21:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.148 07:21:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
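Each round below registers Malloc0 and Malloc1 as /dev/nbd0 and /dev/nbd1 and then polls them with the waitfornbd helper, which is what produces the recurring grep, dd and stat fragments in the trace. A condensed sketch of that helper (the 20-try limit and 4096-byte probe match the trace; the scratch-file path and the sleep between retries are illustrative):

waitfornbd() {
    local nbd_name=$1 i size
    # Wait for the kernel to list the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Then require a direct-I/O read of one block to return real data.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}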
00:06:45.148 07:21:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.149 07:21:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.149 07:21:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.149 07:21:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:45.149 07:21:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.149 Malloc0 00:06:45.149 07:21:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.149 Malloc1 00:06:45.149 07:21:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.149 07:21:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.149 07:21:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.149 07:21:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.149 07:21:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.149 07:21:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.407 07:21:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.407 /dev/nbd0 00:06:45.407 07:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.407 07:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.407 1+0 records in 00:06:45.407 1+0 records out 
00:06:45.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414919 s, 9.9 MB/s 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.407 07:21:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:45.407 07:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.407 07:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.407 07:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.666 /dev/nbd1 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.666 1+0 records in 00:06:45.666 1+0 records out 00:06:45.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445808 s, 9.2 MB/s 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.666 07:21:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.666 07:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.925 { 00:06:45.925 "bdev_name": "Malloc0", 00:06:45.925 "nbd_device": "/dev/nbd0" 00:06:45.925 }, 00:06:45.925 { 00:06:45.925 "bdev_name": "Malloc1", 00:06:45.925 "nbd_device": "/dev/nbd1" 00:06:45.925 } 
00:06:45.925 ]' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.925 { 00:06:45.925 "bdev_name": "Malloc0", 00:06:45.925 "nbd_device": "/dev/nbd0" 00:06:45.925 }, 00:06:45.925 { 00:06:45.925 "bdev_name": "Malloc1", 00:06:45.925 "nbd_device": "/dev/nbd1" 00:06:45.925 } 00:06:45.925 ]' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.925 /dev/nbd1' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.925 /dev/nbd1' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.925 256+0 records in 00:06:45.925 256+0 records out 00:06:45.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643944 s, 163 MB/s 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.925 07:21:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.184 256+0 records in 00:06:46.184 256+0 records out 00:06:46.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207082 s, 50.6 MB/s 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.184 256+0 records in 00:06:46.184 256+0 records out 00:06:46.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220591 s, 47.5 MB/s 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.184 07:21:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.184 07:21:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.443 07:21:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.443 07:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.443 07:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.443 07:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.443 07:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.702 07:21:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.960 07:21:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.960 07:21:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.960 07:21:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.219 [2024-07-25 07:21:19.853089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.219 [2024-07-25 07:21:19.946605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.219 [2024-07-25 07:21:19.946606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.478 [2024-07-25 07:21:19.988553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.478 [2024-07-25 07:21:19.988602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.008 spdk_app_start Round 2 00:06:50.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.008 07:21:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.008 07:21:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.008 07:21:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62319 /var/tmp/spdk-nbd.sock 00:06:50.008 07:21:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62319 ']' 00:06:50.008 07:21:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.008 07:21:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.008 07:21:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
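The count=2 check while the disks are attached and the count=0 check that closes each round both come from nbd_get_count in bdev/nbd_common.sh: it lists the exported devices over RPC and counts the /dev/nbd entries with jq and grep. A sketch of that check, restructured slightly for readability but using the same commands as the trace:

nbd_get_count() {
    local sock=$1 expected=$2
    local json names count
    json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # '|| true': an empty list means 0, not an error
    [ "$count" -ne "$expected" ] && return 1
    return 0
}

# 2 while Malloc0/Malloc1 are attached, 0 after nbd_stop_disk:
nbd_get_count /var/tmp/spdk-nbd.sock 2
nbd_get_count /var/tmp/spdk-nbd.sock 0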
00:06:50.008 07:21:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.008 07:21:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.266 07:21:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.266 07:21:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:50.266 07:21:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.524 Malloc0 00:06:50.524 07:21:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.783 Malloc1 00:06:50.783 07:21:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.783 07:21:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.041 /dev/nbd0 00:06:51.041 07:21:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.041 07:21:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.041 1+0 records in 00:06:51.041 1+0 records out 
00:06:51.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271255 s, 15.1 MB/s 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.041 07:21:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.041 07:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.041 07:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.041 07:21:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.300 /dev/nbd1 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.300 1+0 records in 00:06:51.300 1+0 records out 00:06:51.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224597 s, 18.2 MB/s 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.300 07:21:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.300 07:21:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.559 07:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.559 { 00:06:51.559 "bdev_name": "Malloc0", 00:06:51.559 "nbd_device": "/dev/nbd0" 00:06:51.559 }, 00:06:51.559 { 00:06:51.559 "bdev_name": "Malloc1", 00:06:51.559 "nbd_device": "/dev/nbd1" 00:06:51.559 } 
00:06:51.559 ]' 00:06:51.559 07:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.559 { 00:06:51.559 "bdev_name": "Malloc0", 00:06:51.559 "nbd_device": "/dev/nbd0" 00:06:51.559 }, 00:06:51.559 { 00:06:51.560 "bdev_name": "Malloc1", 00:06:51.560 "nbd_device": "/dev/nbd1" 00:06:51.560 } 00:06:51.560 ]' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.560 /dev/nbd1' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.560 /dev/nbd1' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.560 256+0 records in 00:06:51.560 256+0 records out 00:06:51.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477415 s, 220 MB/s 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.560 256+0 records in 00:06:51.560 256+0 records out 00:06:51.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260199 s, 40.3 MB/s 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.560 256+0 records in 00:06:51.560 256+0 records out 00:06:51.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212084 s, 49.4 MB/s 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.560 07:21:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.560 07:21:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.819 07:21:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.079 07:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.338 07:21:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.339 07:21:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.339 07:21:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.598 07:21:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.856 [2024-07-25 07:21:25.334480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.856 [2024-07-25 07:21:25.416397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.856 [2024-07-25 07:21:25.416404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.857 [2024-07-25 07:21:25.457359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.857 [2024-07-25 07:21:25.457410] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.146 07:21:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62319 /var/tmp/spdk-nbd.sock 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62319 ']' 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
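The teardown that follows runs through the killprocess helper from autotest_common.sh; the uname and ps probes it performs are exactly what the next lines of the trace show. A condensed sketch (the real helper also has a branch for processes named sudo, whose handling is not shown here):

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 0                            # nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace
    fi
    # The real helper special-cases process_name = sudo; that path is omitted.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                           # works because the app was started by this shell
}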
00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:56.146 07:21:28 event.app_repeat -- event/event.sh@39 -- # killprocess 62319 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62319 ']' 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62319 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62319 00:06:56.146 killing process with pid 62319 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62319' 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62319 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62319 00:06:56.146 spdk_app_start is called in Round 0. 00:06:56.146 Shutdown signal received, stop current app iteration 00:06:56.146 Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 reinitialization... 00:06:56.146 spdk_app_start is called in Round 1. 00:06:56.146 Shutdown signal received, stop current app iteration 00:06:56.146 Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 reinitialization... 00:06:56.146 spdk_app_start is called in Round 2. 00:06:56.146 Shutdown signal received, stop current app iteration 00:06:56.146 Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 reinitialization... 00:06:56.146 spdk_app_start is called in Round 3. 00:06:56.146 Shutdown signal received, stop current app iteration 00:06:56.146 07:21:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:56.146 07:21:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:56.146 00:06:56.146 real 0m17.867s 00:06:56.146 user 0m39.544s 00:06:56.146 sys 0m2.743s 00:06:56.146 07:21:28 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.147 ************************************ 00:06:56.147 END TEST app_repeat 00:06:56.147 07:21:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.147 ************************************ 00:06:56.147 07:21:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:56.147 07:21:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.147 07:21:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.147 07:21:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.147 07:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.147 ************************************ 00:06:56.147 START TEST cpu_locks 00:06:56.147 ************************************ 00:06:56.147 07:21:28 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.147 * Looking for test storage... 
00:06:56.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:56.147 07:21:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.147 07:21:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.147 07:21:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.147 07:21:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.147 07:21:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.147 07:21:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.147 07:21:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.147 ************************************ 00:06:56.147 START TEST default_locks 00:06:56.147 ************************************ 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62934 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62934 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62934 ']' 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.147 07:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.147 [2024-07-25 07:21:28.871307] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
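default_locks begins here: cpu_locks.sh pins a single spdk_tgt to core 0 (-m 0x1) and blocks until its default RPC socket answers. A minimal reproduction of that setup, using the binary path and socket from this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"             # autotest helper; default rpc_addr is /var/tmp/spdk.sock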
00:06:56.147 [2024-07-25 07:21:28.871387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62934 ] 00:06:56.405 [2024-07-25 07:21:29.007635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.405 [2024-07-25 07:21:29.100036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62934 ']' 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.343 killing process with pid 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62934' 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62934 00:06:57.343 07:21:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62934 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62934 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62934 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62934 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62934 ']' 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.602 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.602 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62934) - No such process 00:06:57.602 ERROR: process (pid: 62934) is no longer running 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.602 00:06:57.602 real 0m1.494s 00:06:57.602 user 0m1.543s 00:06:57.602 sys 0m0.431s 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.602 07:21:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.602 ************************************ 00:06:57.602 END TEST default_locks 00:06:57.602 ************************************ 00:06:57.862 07:21:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:57.862 07:21:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.862 07:21:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.862 07:21:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.862 ************************************ 00:06:57.862 START TEST default_locks_via_rpc 00:06:57.862 ************************************ 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62992 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62992 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62992 ']' 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
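The body of default_locks is visible in the trace above: while the target runs, lslocks on its pid must show an spdk_cpu_lock entry; after killprocess, waitforlisten on the dead pid is expected to fail (hence the NOT wrapper and the 'No such process' line); and no per-core lock file may remain. A sketch of those three checks, assuming nullglob to make the empty-glob test explicit (the real no_locks helper may handle that differently):

    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # core 0 lock is held while the target is alive
    killprocess "$spdk_tgt_pid"
    NOT waitforlisten "$spdk_tgt_pid"                      # must fail: the process is gone
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))                           # no stale per-core lock files left behind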
00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.862 07:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.862 [2024-07-25 07:21:30.418799] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:06:57.862 [2024-07-25 07:21:30.418870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62992 ] 00:06:57.862 [2024-07-25 07:21:30.554871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.127 [2024-07-25 07:21:30.648159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62992 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62992 00:06:58.718 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62992 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62992 ']' 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62992 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62992 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.285 killing process with pid 62992 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62992' 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62992 00:06:59.285 07:21:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62992 00:06:59.544 00:06:59.544 real 0m1.740s 00:06:59.544 user 0m1.810s 00:06:59.544 sys 0m0.525s 00:06:59.544 07:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.544 07:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.544 ************************************ 00:06:59.544 END TEST default_locks_via_rpc 00:06:59.544 ************************************ 00:06:59.544 07:21:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.544 07:21:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.544 07:21:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.544 07:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.544 ************************************ 00:06:59.544 START TEST non_locking_app_on_locked_coremask 00:06:59.544 ************************************ 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63056 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63056 /var/tmp/spdk.sock 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63056 ']' 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.544 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.545 07:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.545 [2024-07-25 07:21:32.225962] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
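default_locks_via_rpc, traced above, toggles the same behaviour at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files, framework_enable_cpumask_locks takes them again, and lslocks then has to show the lock once more. The equivalent calls with rpc.py, which defaults to /var/tmp/spdk.sock, the socket used here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks      # drop /var/tmp/spdk_cpu_lock_* for this target
    $rpc framework_enable_cpumask_locks       # reclaim them; fails if another process holds a core
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock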
00:06:59.545 [2024-07-25 07:21:32.226043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63056 ] 00:06:59.804 [2024-07-25 07:21:32.363853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.804 [2024-07-25 07:21:32.469261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63084 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63084 /var/tmp/spdk2.sock 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63084 ']' 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.371 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 [2024-07-25 07:21:33.139449] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:00.630 [2024-07-25 07:21:33.139518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63084 ] 00:07:00.630 [2024-07-25 07:21:33.268197] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
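non_locking_app_on_locked_coremask pairs the locked target above with a second spdk_tgt on the same core that opts out of locking entirely, so both can run on core 0 at once; the 'CPU core locks deactivated' notice confirms the opt-out. The second launch as traced here:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock     # comes up even though pid 63056 holds core 0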
00:07:00.630 [2024-07-25 07:21:33.268235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.888 [2024-07-25 07:21:33.458016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.455 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.455 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.455 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63056 00:07:01.455 07:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63056 00:07:01.455 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63056 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63056 ']' 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63056 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63056 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63056' 00:07:01.714 killing process with pid 63056 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63056 00:07:01.714 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63056 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63084 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63084 ']' 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63084 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63084 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.282 killing process with pid 63084 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63084' 00:07:02.282 07:21:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63084 00:07:02.282 07:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63084 00:07:02.866 00:07:02.866 real 0m3.137s 00:07:02.866 user 0m3.389s 00:07:02.866 sys 0m0.823s 00:07:02.866 07:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.866 07:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 ************************************ 00:07:02.866 END TEST non_locking_app_on_locked_coremask 00:07:02.866 ************************************ 00:07:02.866 07:21:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:02.866 07:21:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.866 07:21:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.866 07:21:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 ************************************ 00:07:02.866 START TEST locking_app_on_unlocked_coremask 00:07:02.866 ************************************ 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63153 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63153 /var/tmp/spdk.sock 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63153 ']' 00:07:02.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.866 07:21:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 [2024-07-25 07:21:35.435912] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:02.866 [2024-07-25 07:21:35.436002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63153 ] 00:07:02.866 [2024-07-25 07:21:35.574558] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:02.866 [2024-07-25 07:21:35.574627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.124 [2024-07-25 07:21:35.679246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63180 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63180 /var/tmp/spdk2.sock 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63180 ']' 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.690 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.691 07:21:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.691 [2024-07-25 07:21:36.331112] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
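locking_app_on_unlocked_coremask reverses the roles: the first target (pid 63153) runs with --disable-cpumask-locks, so core 0 stays unclaimed, and a second, normally locking target on the same core can still acquire the lock, as the trace for pid 63180 shows next. In outline, with hypothetical pid variables:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $bin -m 0x1 --disable-cpumask-locks &                  # first target: core 0 left unclaimed
    pid1=$!; waitforlisten "$pid1"
    $bin -m 0x1 -r /var/tmp/spdk2.sock &                   # second target: free to take the core 0 lock
    pid2=$!; waitforlisten "$pid2" /var/tmp/spdk2.sock
    lslocks -p "$pid2" | grep -q spdk_cpu_lock             # the lock belongs to the second target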
00:07:03.691 [2024-07-25 07:21:36.331202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63180 ] 00:07:03.949 [2024-07-25 07:21:36.461709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.949 [2024-07-25 07:21:36.662755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.515 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.515 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:04.515 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63180 00:07:04.515 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63180 00:07:04.515 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63153 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63153 ']' 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63153 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63153 00:07:05.081 killing process with pid 63153 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63153' 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63153 00:07:05.081 07:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63153 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63180 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63180 ']' 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63180 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63180 00:07:05.648 killing process with pid 63180 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.648 07:21:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63180' 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63180 00:07:05.648 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63180 00:07:05.906 ************************************ 00:07:05.906 END TEST locking_app_on_unlocked_coremask 00:07:05.906 ************************************ 00:07:05.906 00:07:05.907 real 0m3.273s 00:07:05.907 user 0m3.503s 00:07:05.907 sys 0m0.907s 00:07:05.907 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.907 07:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.164 07:21:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:06.164 07:21:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.164 07:21:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.164 07:21:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.164 ************************************ 00:07:06.164 START TEST locking_app_on_locked_coremask 00:07:06.164 ************************************ 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63259 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63259 /var/tmp/spdk.sock 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63259 ']' 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.164 07:21:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.164 [2024-07-25 07:21:38.755572] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:06.164 [2024-07-25 07:21:38.755638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63259 ] 00:07:06.164 [2024-07-25 07:21:38.876317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.423 [2024-07-25 07:21:38.975867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63286 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63286 /var/tmp/spdk2.sock 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63286 /var/tmp/spdk2.sock 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:06.990 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63286 /var/tmp/spdk2.sock 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63286 ']' 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.991 07:21:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.991 [2024-07-25 07:21:39.709906] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:06.991 [2024-07-25 07:21:39.709971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63286 ] 00:07:07.249 [2024-07-25 07:21:39.839430] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63259 has claimed it. 00:07:07.249 [2024-07-25 07:21:39.839489] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.816 ERROR: process (pid: 63286) is no longer running 00:07:07.816 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63286) - No such process 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63259 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63259 00:07:07.816 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.074 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63259 00:07:08.074 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63259 ']' 00:07:08.074 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63259 00:07:08.074 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.074 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.074 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63259 00:07:08.334 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.334 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.334 killing process with pid 63259 00:07:08.334 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63259' 00:07:08.334 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63259 00:07:08.334 07:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63259 00:07:08.593 00:07:08.593 real 0m2.451s 00:07:08.593 user 0m2.749s 00:07:08.593 sys 0m0.605s 00:07:08.593 07:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.593 07:21:41 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:08.593 ************************************ 00:07:08.593 END TEST locking_app_on_locked_coremask 00:07:08.593 ************************************ 00:07:08.593 07:21:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:08.593 07:21:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.593 07:21:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.593 07:21:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.593 ************************************ 00:07:08.593 START TEST locking_overlapped_coremask 00:07:08.593 ************************************ 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63333 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63333 /var/tmp/spdk.sock 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63333 ']' 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.593 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.594 07:21:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:08.594 07:21:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.594 [2024-07-25 07:21:41.260910] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
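locking_app_on_locked_coremask, which ends above, is the negative case: with pid 63259 holding core 0, a second target started without --disable-cpumask-locks aborts with 'Cannot create lock on core 0 ... exiting', and the script asserts that expected failure with the NOT wrapper. A sketch of the assertion; the one-line NOT body is an assumption here, the real helper lives in autotest_common.sh:

    NOT() { ! "$@"; }                                      # sketch: succeed only if the wrapped command fails
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock # passes because the second target never comes up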
00:07:08.594 [2024-07-25 07:21:41.260978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63333 ] 00:07:08.853 [2024-07-25 07:21:41.385296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.853 [2024-07-25 07:21:41.490080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.853 [2024-07-25 07:21:41.490263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.853 [2024-07-25 07:21:41.490264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63363 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63363 /var/tmp/spdk2.sock 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63363 /var/tmp/spdk2.sock 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63363 /var/tmp/spdk2.sock 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63363 ']' 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.791 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.791 [2024-07-25 07:21:42.235582] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
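locking_overlapped_coremask uses partially overlapping masks: -m 0x7 pins the first target to cores 0-2 and -m 0x1c asks the second for cores 2-4, so the two masks collide exactly on core 2, which is why the next trace reports 'Cannot create lock on core 2, probably process 63333 has claimed it'. The overlap can be checked directly:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))        # prints 0x4, i.e. core 2 is requested by both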
00:07:09.791 [2024-07-25 07:21:42.235649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63363 ] 00:07:09.791 [2024-07-25 07:21:42.371516] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63333 has claimed it. 00:07:09.791 [2024-07-25 07:21:42.371580] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.361 ERROR: process (pid: 63363) is no longer running 00:07:10.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63363) - No such process 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63333 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63333 ']' 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63333 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63333 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.361 killing process with pid 63333 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63333' 00:07:10.361 07:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63333 00:07:10.361 07:21:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63333 00:07:10.621 00:07:10.621 real 0m2.071s 00:07:10.621 user 0m5.731s 00:07:10.621 sys 0m0.383s 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.621 ************************************ 00:07:10.621 END TEST locking_overlapped_coremask 00:07:10.621 ************************************ 00:07:10.621 07:21:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:10.621 07:21:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.621 07:21:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.621 07:21:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.621 ************************************ 00:07:10.621 START TEST locking_overlapped_coremask_via_rpc 00:07:10.621 ************************************ 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63409 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63409 /var/tmp/spdk.sock 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63409 ']' 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.621 07:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.880 [2024-07-25 07:21:43.391349] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:10.880 [2024-07-25 07:21:43.391417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63409 ] 00:07:10.880 [2024-07-25 07:21:43.530028] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
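locking_overlapped_coremask_via_rpc, starting above, brings up both targets with --disable-cpumask-locks so the overlapping masks (0x7 and 0x1c) can coexist; the conflict is then provoked later over JSON-RPC rather than at startup. The setup as traced, with hypothetical pid variables:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $bin -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no locks taken yet
    pid1=$!; waitforlisten "$pid1"
    $bin -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, no locks taken yet
    pid2=$!; waitforlisten "$pid2" /var/tmp/spdk2.sock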
00:07:10.880 [2024-07-25 07:21:43.530074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.139 [2024-07-25 07:21:43.635656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.139 [2024-07-25 07:21:43.635835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.139 [2024-07-25 07:21:43.635837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63439 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63439 /var/tmp/spdk2.sock 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63439 ']' 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.706 07:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.706 [2024-07-25 07:21:44.329743] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:11.706 [2024-07-25 07:21:44.329818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63439 ] 00:07:11.966 [2024-07-25 07:21:44.461357] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.966 [2024-07-25 07:21:44.461406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.966 [2024-07-25 07:21:44.674799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.966 [2024-07-25 07:21:44.678215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.966 [2024-07-25 07:21:44.678219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.903 [2024-07-25 07:21:45.323243] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63409 has claimed it. 
00:07:12.903 2024/07/25 07:21:45 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:12.903 request: 00:07:12.903 { 00:07:12.903 "method": "framework_enable_cpumask_locks", 00:07:12.903 "params": {} 00:07:12.903 } 00:07:12.903 Got JSON-RPC error response 00:07:12.903 GoRPCClient: error on JSON-RPC call 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63409 /var/tmp/spdk.sock 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63409 ']' 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63439 /var/tmp/spdk2.sock 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63439 ']' 00:07:12.903 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.904 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.904 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
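Editorial note on the failure above: this is the expected outcome of the negative test. The second target was started with --disable-cpumask-locks on mask 0x1c, which overlaps core 2, and the first target (pid 63409 in this log) still holds the lock for that core, so asking the second target to take the locks afterwards fails with JSON-RPC error -32603. A rough sketch of the same sequence, assuming a standard SPDK checkout; it is not part of the test output:

# Precondition: another spdk_tgt already owns the per-core lock files
# (/var/tmp/spdk_cpu_lock_*), including the one for core 2.
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# (the test waits for the socket via waitforlisten before issuing the RPC)
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected: Code=-32603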
00:07:12.904 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.904 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.163 00:07:13.163 real 0m2.474s 00:07:13.163 user 0m1.209s 00:07:13.163 sys 0m0.209s 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.163 07:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.163 ************************************ 00:07:13.163 END TEST locking_overlapped_coremask_via_rpc 00:07:13.163 ************************************ 00:07:13.163 07:21:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:13.163 07:21:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63409 ]] 00:07:13.163 07:21:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63409 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63409 ']' 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63409 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63409 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63409' 00:07:13.163 killing process with pid 63409 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63409 00:07:13.163 07:21:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63409 00:07:13.732 07:21:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63439 ]] 00:07:13.732 07:21:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63439 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63439 ']' 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63439 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.732 
07:21:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63439 00:07:13.732 killing process with pid 63439 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63439' 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63439 00:07:13.732 07:21:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63439 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63409 ]] 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63409 00:07:13.991 07:21:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63409 ']' 00:07:13.991 07:21:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63409 00:07:13.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63409) - No such process 00:07:13.991 07:21:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63409 is not found' 00:07:13.991 Process with pid 63409 is not found 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63439 ]] 00:07:13.991 Process with pid 63439 is not found 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63439 00:07:13.991 07:21:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63439 ']' 00:07:13.991 07:21:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63439 00:07:13.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63439) - No such process 00:07:13.991 07:21:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63439 is not found' 00:07:13.991 07:21:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.249 00:07:14.249 real 0m18.050s 00:07:14.249 user 0m32.176s 00:07:14.249 sys 0m4.690s 00:07:14.249 07:21:46 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.249 07:21:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 ************************************ 00:07:14.249 END TEST cpu_locks 00:07:14.249 ************************************ 00:07:14.249 00:07:14.249 real 0m46.229s 00:07:14.249 user 1m31.262s 00:07:14.249 sys 0m8.294s 00:07:14.249 07:21:46 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.249 ************************************ 00:07:14.249 END TEST event 00:07:14.249 ************************************ 00:07:14.249 07:21:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 07:21:46 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:14.249 07:21:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.249 07:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.249 07:21:46 -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 ************************************ 00:07:14.249 START TEST thread 00:07:14.249 ************************************ 00:07:14.249 07:21:46 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:14.249 * Looking for test storage... 
00:07:14.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:14.249 07:21:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.249 07:21:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:14.249 07:21:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.249 07:21:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 ************************************ 00:07:14.249 START TEST thread_poller_perf 00:07:14.249 ************************************ 00:07:14.250 07:21:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.508 [2024-07-25 07:21:46.993918] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:14.508 [2024-07-25 07:21:46.994107] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:07:14.508 [2024-07-25 07:21:47.136513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.508 [2024-07-25 07:21:47.237254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.508 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:15.883 ====================================== 00:07:15.883 busy:2299426990 (cyc) 00:07:15.883 total_run_count: 384000 00:07:15.883 tsc_hz: 2290000000 (cyc) 00:07:15.883 ====================================== 00:07:15.883 poller_cost: 5988 (cyc), 2614 (nsec) 00:07:15.883 00:07:15.883 ************************************ 00:07:15.883 END TEST thread_poller_perf 00:07:15.883 ************************************ 00:07:15.883 real 0m1.351s 00:07:15.883 user 0m1.186s 00:07:15.883 sys 0m0.056s 00:07:15.883 07:21:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.883 07:21:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.883 07:21:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.883 07:21:48 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:15.883 07:21:48 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.883 07:21:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.883 ************************************ 00:07:15.883 START TEST thread_poller_perf 00:07:15.883 ************************************ 00:07:15.883 07:21:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.883 [2024-07-25 07:21:48.403792] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:15.883 [2024-07-25 07:21:48.404284] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63621 ] 00:07:15.883 [2024-07-25 07:21:48.545868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.142 Running 1000 pollers for 1 seconds with 0 microseconds period. 
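Editorial note: the poller_cost line printed for the first run above can be checked by hand from the counters it reports, busy cycles divided by total_run_count, converted to nanoseconds with the reported TSC frequency. A small awk sketch of that arithmetic (not test output):

# 2299426990 busy cycles over 384000 poller runs at 2.29 GHz
awk 'BEGIN { busy=2299426990; runs=384000; hz=2290000000;
             cyc = busy/runs;                              # ~5988 cycles per poll
             printf "%.0f cyc, %.0f nsec\n", cyc, cyc/hz*1e9 }'   # ~2614 nsec, matching the log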
00:07:16.142 [2024-07-25 07:21:48.651398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.077 ====================================== 00:07:17.077 busy:2291781858 (cyc) 00:07:17.077 total_run_count: 4819000 00:07:17.077 tsc_hz: 2290000000 (cyc) 00:07:17.077 ====================================== 00:07:17.077 poller_cost: 475 (cyc), 207 (nsec) 00:07:17.077 ************************************ 00:07:17.077 END TEST thread_poller_perf 00:07:17.077 ************************************ 00:07:17.077 00:07:17.077 real 0m1.350s 00:07:17.077 user 0m1.191s 00:07:17.077 sys 0m0.051s 00:07:17.077 07:21:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.077 07:21:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.077 07:21:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:17.077 ************************************ 00:07:17.077 END TEST thread 00:07:17.077 ************************************ 00:07:17.077 00:07:17.077 real 0m2.947s 00:07:17.077 user 0m2.463s 00:07:17.077 sys 0m0.276s 00:07:17.077 07:21:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.077 07:21:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.336 07:21:49 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:17.336 07:21:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.336 07:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.336 07:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:17.336 ************************************ 00:07:17.336 START TEST accel 00:07:17.336 ************************************ 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:17.336 * Looking for test storage... 00:07:17.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:17.336 07:21:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:17.336 07:21:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:17.336 07:21:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.336 07:21:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63696 00:07:17.336 07:21:49 accel -- accel/accel.sh@63 -- # waitforlisten 63696 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@829 -- # '[' -z 63696 ']' 00:07:17.336 07:21:49 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:17.336 07:21:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.336 07:21:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.336 07:21:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.336 07:21:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.336 07:21:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.336 07:21:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.336 07:21:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.336 07:21:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:17.336 07:21:49 accel -- accel/accel.sh@41 -- # jq -r . 00:07:17.336 [2024-07-25 07:21:50.027660] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:17.336 [2024-07-25 07:21:50.027727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63696 ] 00:07:17.594 [2024-07-25 07:21:50.149871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.594 [2024-07-25 07:21:50.272228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.529 07:21:50 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.529 07:21:50 accel -- common/autotest_common.sh@862 -- # return 0 00:07:18.529 07:21:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:18.529 07:21:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:18.529 07:21:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:18.529 07:21:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:18.529 07:21:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:18.529 07:21:50 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:18.529 07:21:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:18.529 07:21:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.529 07:21:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.529 07:21:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- 
# read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:18.529 07:21:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:18.529 07:21:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:18.529 07:21:50 accel -- accel/accel.sh@75 -- # killprocess 63696 00:07:18.529 07:21:50 accel -- common/autotest_common.sh@948 -- # '[' -z 63696 ']' 00:07:18.530 07:21:50 accel -- common/autotest_common.sh@952 -- # kill -0 63696 00:07:18.530 07:21:50 accel -- common/autotest_common.sh@953 -- # uname 00:07:18.530 07:21:50 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.530 
07:21:50 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63696 00:07:18.530 killing process with pid 63696 00:07:18.530 07:21:51 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.530 07:21:51 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.530 07:21:51 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63696' 00:07:18.530 07:21:51 accel -- common/autotest_common.sh@967 -- # kill 63696 00:07:18.530 07:21:51 accel -- common/autotest_common.sh@972 -- # wait 63696 00:07:18.788 07:21:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:18.788 07:21:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:18.788 07:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.788 07:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.788 07:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.788 07:21:51 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:18.788 07:21:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:18.788 07:21:51 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.788 07:21:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:18.788 07:21:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:18.788 07:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:18.788 07:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.788 07:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.788 ************************************ 00:07:18.788 START TEST accel_missing_filename 00:07:18.788 ************************************ 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.788 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:18.788 07:21:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:18.788 [2024-07-25 07:21:51.468708] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:18.788 [2024-07-25 07:21:51.468790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63766 ] 00:07:19.046 [2024-07-25 07:21:51.609670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.046 [2024-07-25 07:21:51.709163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.046 [2024-07-25 07:21:51.752404] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.303 [2024-07-25 07:21:51.812649] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:19.303 A filename is required. 
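Editorial note: the "A filename is required." message above is the expected failure of this negative test; for compress/decompress workloads accel_perf takes its input through -l (see the option help printed further down). A hedged sketch of a non-failing invocation, using the bib test file that the next test also points at:

# Compress benchmark with an input file supplied via -l; the verify flag (-y) is
# intentionally omitted, since the following test shows compress rejects it.
./build/examples/accel_perf -t 1 -w compress -l test/accel/bib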
00:07:19.303 ************************************ 00:07:19.303 END TEST accel_missing_filename 00:07:19.303 ************************************ 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.303 00:07:19.303 real 0m0.463s 00:07:19.303 user 0m0.326s 00:07:19.303 sys 0m0.098s 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.303 07:21:51 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:19.303 07:21:51 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.303 07:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:19.303 07:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.303 07:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.303 ************************************ 00:07:19.303 START TEST accel_compress_verify 00:07:19.304 ************************************ 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.304 07:21:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.304 07:21:51 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:19.304 07:21:51 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:19.304 [2024-07-25 07:21:51.991636] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:19.304 [2024-07-25 07:21:51.991735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63791 ] 00:07:19.562 [2024-07-25 07:21:52.128159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.562 [2024-07-25 07:21:52.229794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.562 [2024-07-25 07:21:52.272420] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.820 [2024-07-25 07:21:52.333213] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:19.820 00:07:19.820 Compression does not support the verify option, aborting. 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:19.820 ************************************ 00:07:19.820 END TEST accel_compress_verify 00:07:19.820 ************************************ 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.820 00:07:19.820 real 0m0.457s 00:07:19.820 user 0m0.307s 00:07:19.820 sys 0m0.096s 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.820 07:21:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.820 07:21:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:19.820 07:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.820 07:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.820 07:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.820 ************************************ 00:07:19.820 START TEST accel_wrong_workload 00:07:19.820 ************************************ 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.820 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 
00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:19.820 07:21:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:19.820 Unsupported workload type: foobar 00:07:19.820 [2024-07-25 07:21:52.515049] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:19.820 accel_perf options: 00:07:19.820 [-h help message] 00:07:19.820 [-q queue depth per core] 00:07:19.820 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:19.820 [-T number of threads per core 00:07:19.820 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:19.820 [-t time in seconds] 00:07:19.820 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:19.820 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:19.820 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:19.821 [-l for compress/decompress workloads, name of uncompressed input file 00:07:19.821 [-S for crc32c workload, use this seed value (default 0) 00:07:19.821 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:19.821 [-f for fill workload, use this BYTE value (default 255) 00:07:19.821 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:19.821 [-y verify result if this switch is on] 00:07:19.821 [-a tasks to allocate per core (default: same value as -q)] 00:07:19.821 Can be used to spread operations across a wider range of memory. 
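Editorial note: the option listing above enumerates the workload types accel_perf accepts, so "foobar" is rejected as intended by this negative test. A sketch of a valid call, essentially what the accel_crc32c test further down executes:

# crc32c over the default 4 KiB buffers with seed 32, verifying the result
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y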
00:07:19.821 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:19.821 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.821 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.821 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.821 00:07:19.821 real 0m0.048s 00:07:19.821 user 0m0.025s 00:07:19.821 sys 0m0.021s 00:07:19.821 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.821 07:21:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:19.821 ************************************ 00:07:19.821 END TEST accel_wrong_workload 00:07:19.821 ************************************ 00:07:20.106 07:21:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.106 07:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:20.106 07:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.106 07:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.106 ************************************ 00:07:20.106 START TEST accel_negative_buffers 00:07:20.106 ************************************ 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:20.106 07:21:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:20.106 -x option must be non-negative. 
00:07:20.106 [2024-07-25 07:21:52.615893] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:20.106 accel_perf options: 00:07:20.106 [-h help message] 00:07:20.106 [-q queue depth per core] 00:07:20.106 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.106 [-T number of threads per core 00:07:20.106 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.106 [-t time in seconds] 00:07:20.106 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.106 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:20.106 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.106 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.106 [-S for crc32c workload, use this seed value (default 0) 00:07:20.106 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.106 [-f for fill workload, use this BYTE value (default 255) 00:07:20.106 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.106 [-y verify result if this switch is on] 00:07:20.106 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.106 Can be used to spread operations across a wider range of memory. 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.106 00:07:20.106 real 0m0.044s 00:07:20.106 user 0m0.028s 00:07:20.106 sys 0m0.014s 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.106 07:21:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:20.106 ************************************ 00:07:20.106 END TEST accel_negative_buffers 00:07:20.106 ************************************ 00:07:20.106 07:21:52 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:20.106 07:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.106 07:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.106 07:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.106 ************************************ 00:07:20.106 START TEST accel_crc32c 00:07:20.106 ************************************ 00:07:20.106 07:21:52 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:20.106 07:21:52 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:20.106 07:21:52 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:20.106 [2024-07-25 07:21:52.717257] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:20.106 [2024-07-25 07:21:52.717421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63849 ] 00:07:20.381 [2024-07-25 07:21:52.858804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.381 [2024-07-25 07:21:52.965373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.381 07:21:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.382 07:21:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.382 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.382 07:21:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:21.754 07:21:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.754 00:07:21.754 real 0m1.469s 00:07:21.754 user 0m0.017s 00:07:21.754 sys 0m0.004s 00:07:21.754 07:21:54 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.754 07:21:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:21.754 ************************************ 00:07:21.754 END TEST accel_crc32c 00:07:21.754 ************************************ 00:07:21.754 07:21:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:21.754 07:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:21.754 07:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.754 07:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.754 ************************************ 00:07:21.754 START TEST accel_crc32c_C2 00:07:21.755 ************************************ 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.755 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:21.755 [2024-07-25 07:21:54.240818] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:21.755 [2024-07-25 07:21:54.240906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63884 ] 00:07:21.755 [2024-07-25 07:21:54.375050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.755 [2024-07-25 07:21:54.479377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.013 07:21:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.951 
07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.951 00:07:22.951 real 0m1.462s 00:07:22.951 user 0m1.270s 00:07:22.951 sys 0m0.107s 00:07:22.951 ************************************ 00:07:22.951 END TEST accel_crc32c_C2 00:07:22.951 ************************************ 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.951 07:21:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:23.210 07:21:55 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:23.211 07:21:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:23.211 07:21:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.211 07:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.211 ************************************ 00:07:23.211 START TEST accel_copy 00:07:23.211 ************************************ 00:07:23.211 07:21:55 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.211 07:21:55 
accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:23.211 07:21:55 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:23.211 [2024-07-25 07:21:55.761916] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:23.211 [2024-07-25 07:21:55.762008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63918 ] 00:07:23.211 [2024-07-25 07:21:55.901147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.470 [2024-07-25 07:21:56.004729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 
07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.470 07:21:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 ************************************ 00:07:24.849 END TEST accel_copy 00:07:24.849 ************************************ 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:24.849 07:21:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.849 00:07:24.849 real 0m1.464s 00:07:24.849 user 0m1.276s 00:07:24.849 sys 0m0.098s 00:07:24.849 07:21:57 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.849 07:21:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.849 07:21:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.849 07:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:24.849 07:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.849 07:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.849 ************************************ 00:07:24.849 START TEST accel_fill 00:07:24.849 ************************************ 00:07:24.849 07:21:57 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
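(For reference alongside the fill run configured above: a minimal C sketch of what the software fill workload appears to amount to -- writing one byte value, here 0x80 as suggested by "-f 128" and the val=0x80 line, across a 4096-byte buffer and then verifying it. This is an illustrative sketch under those assumptions, not SPDK's accel_perf implementation, and it ignores queue depth and timing.)

    /* Illustrative sketch only -- not SPDK's accel_perf code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const size_t buf_len = 4096;   /* matches val='4096 bytes' in the log */
        const uint8_t pattern = 0x80;  /* matches val=0x80 (-f 128) in the log */
        uint8_t *dst = malloc(buf_len);
        if (dst == NULL)
            return 1;

        memset(dst, pattern, buf_len); /* the whole software "fill" operation */

        for (size_t i = 0; i < buf_len; i++) {
            if (dst[i] != pattern) {
                fprintf(stderr, "mismatch at byte %zu\n", i);
                free(dst);
                return 1;
            }
        }
        printf("filled %zu bytes with 0x%02x\n", buf_len, pattern);
        free(dst);
        return 0;
    }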
00:07:24.849 [2024-07-25 07:21:57.286486] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:24.849 [2024-07-25 07:21:57.286685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63953 ] 00:07:24.849 [2024-07-25 07:21:57.425974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.849 [2024-07-25 07:21:57.532663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:24.850 
07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.850 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 07:21:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.050 ************************************ 00:07:26.050 END TEST accel_fill 00:07:26.050 ************************************ 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:26.050 07:21:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.050 00:07:26.050 real 0m1.468s 00:07:26.050 user 0m1.277s 00:07:26.050 sys 0m0.092s 00:07:26.050 07:21:58 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.050 07:21:58 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:26.050 07:21:58 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.050 07:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:26.050 07:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.050 07:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.050 ************************************ 00:07:26.050 START TEST accel_copy_crc32c 00:07:26.050 ************************************ 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:26.050 07:21:58 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:26.310 [2024-07-25 07:21:58.810257] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
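(For reference alongside the copy_crc32c run starting above: a minimal C sketch of the operation being measured -- copy a 4096-byte buffer and compute CRC-32C over it. The bitwise loop below uses the conventional reflected Castagnoli polynomial 0x82F63B78; it is an illustrative reference only, not SPDK's software accel implementation, and the 0xA5 test pattern is arbitrary.)

    /* Illustrative sketch only -- not SPDK's implementation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c_sw(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;            /* conventional initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)  /* reflected Castagnoli polynomial */
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t src[4096], dst[4096];          /* matches val='4096 bytes' */
        memset(src, 0xA5, sizeof(src));        /* arbitrary test pattern */

        memcpy(dst, src, sizeof(src));                /* the "copy" half */
        uint32_t crc = crc32c_sw(dst, sizeof(dst));   /* the "crc32c" half */

        printf("crc32c = 0x%08x\n", crc);
        return 0;
    }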
00:07:26.310 [2024-07-25 07:21:58.810340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63987 ] 00:07:26.310 [2024-07-25 07:21:58.933714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.568 [2024-07-25 07:21:59.059583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 
07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.569 07:21:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.948 00:07:27.948 real 0m1.472s 00:07:27.948 user 0m1.276s 00:07:27.948 sys 0m0.099s 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.948 07:22:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:27.948 ************************************ 00:07:27.948 END TEST accel_copy_crc32c 00:07:27.948 ************************************ 00:07:27.948 07:22:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:27.948 07:22:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:27.948 07:22:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.948 07:22:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.948 ************************************ 00:07:27.948 START TEST accel_copy_crc32c_C2 00:07:27.948 ************************************ 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:27.948 [2024-07-25 07:22:00.336494] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:27.948 [2024-07-25 07:22:00.336586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64022 ] 00:07:27.948 [2024-07-25 07:22:00.477484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.948 [2024-07-25 07:22:00.572572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.948 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 07:22:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 ************************************ 00:07:29.324 END TEST accel_copy_crc32c_C2 00:07:29.324 ************************************ 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.324 00:07:29.324 real 0m1.450s 00:07:29.324 user 0m1.263s 00:07:29.324 sys 0m0.100s 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.324 07:22:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:29.324 07:22:01 accel -- accel/accel.sh@107 -- # run_test 
accel_dualcast accel_test -t 1 -w dualcast -y 00:07:29.324 07:22:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.324 07:22:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.324 07:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.324 ************************************ 00:07:29.324 START TEST accel_dualcast 00:07:29.324 ************************************ 00:07:29.324 07:22:01 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:29.324 07:22:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:29.324 [2024-07-25 07:22:01.835623] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
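(For reference alongside the dualcast run starting above: dualcast copies one source buffer into two destination buffers. The sketch below assumes the software path behaves like two plain memcpy calls over 4096-byte buffers; it is illustrative only, not SPDK's implementation.)

    /* Illustrative sketch only -- not SPDK's implementation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint8_t src[4096], dst1[4096], dst2[4096];  /* matches val='4096 bytes' */
        memset(src, 0x5A, sizeof(src));             /* arbitrary test pattern */

        memcpy(dst1, src, sizeof(src));             /* first destination */
        memcpy(dst2, src, sizeof(src));             /* second destination */

        int ok = memcmp(dst1, src, sizeof(src)) == 0 &&
                 memcmp(dst2, src, sizeof(src)) == 0;
        printf("dualcast %s\n", ok ? "verified" : "FAILED");
        return ok ? 0 : 1;
    }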
00:07:29.324 [2024-07-25 07:22:01.835755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64051 ] 00:07:29.324 [2024-07-25 07:22:01.977902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.583 [2024-07-25 07:22:02.079097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.583 07:22:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.526 07:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.526 07:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.526 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.526 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.526 07:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:30.784 07:22:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.784 00:07:30.784 real 0m1.452s 00:07:30.784 user 0m1.266s 00:07:30.784 sys 0m0.099s 00:07:30.784 07:22:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.784 07:22:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:30.784 ************************************ 00:07:30.784 END TEST accel_dualcast 00:07:30.784 ************************************ 00:07:30.784 07:22:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:30.784 07:22:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:30.784 07:22:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.784 07:22:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.784 ************************************ 00:07:30.784 START TEST accel_compare 00:07:30.784 ************************************ 00:07:30.784 07:22:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:30.784 07:22:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:30.784 [2024-07-25 07:22:03.365185] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:30.784 [2024-07-25 07:22:03.365280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64091 ] 00:07:30.784 [2024-07-25 07:22:03.505490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.043 [2024-07-25 07:22:03.604564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.043 07:22:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.423 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:32.424 07:22:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.424 00:07:32.424 real 0m1.459s 00:07:32.424 user 0m1.276s 00:07:32.424 sys 0m0.095s 00:07:32.424 07:22:04 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.424 07:22:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:32.424 ************************************ 00:07:32.424 END TEST accel_compare 00:07:32.424 ************************************ 00:07:32.424 07:22:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:32.424 07:22:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:32.424 07:22:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.424 07:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.424 ************************************ 00:07:32.424 START TEST accel_xor 00:07:32.424 ************************************ 00:07:32.424 07:22:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:32.424 07:22:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:32.424 [2024-07-25 07:22:04.887125] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:32.424 [2024-07-25 07:22:04.887302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64120 ] 00:07:32.424 [2024-07-25 07:22:05.026616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.424 [2024-07-25 07:22:05.132006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.683 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.684 07:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.622 07:22:06 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:33.622 07:22:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.622 00:07:33.622 real 0m1.476s 00:07:33.622 user 0m1.275s 00:07:33.622 sys 0m0.110s 00:07:33.622 07:22:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.622 07:22:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:33.622 ************************************ 00:07:33.622 END TEST accel_xor 00:07:33.622 ************************************ 00:07:33.882 07:22:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:33.882 07:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:33.882 07:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.882 07:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.882 ************************************ 00:07:33.882 START TEST accel_xor 00:07:33.882 ************************************ 00:07:33.882 07:22:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:33.882 07:22:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:33.882 [2024-07-25 07:22:06.420406] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:33.882 [2024-07-25 07:22:06.421161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64160 ] 00:07:33.882 [2024-07-25 07:22:06.558510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.142 [2024-07-25 07:22:06.659429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.142 07:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.536 07:22:07 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.536 ************************************ 00:07:35.536 END TEST accel_xor 00:07:35.536 ************************************ 00:07:35.536 07:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.537 07:22:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:35.537 07:22:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.537 00:07:35.537 real 0m1.465s 00:07:35.537 user 0m1.279s 00:07:35.537 sys 0m0.099s 00:07:35.537 07:22:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.537 07:22:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:35.537 07:22:07 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:35.537 07:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:35.537 07:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.537 07:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.537 ************************************ 00:07:35.537 START TEST accel_dif_verify 00:07:35.537 ************************************ 00:07:35.537 07:22:07 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:35.537 07:22:07 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:35.537 [2024-07-25 07:22:07.947205] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:35.537 [2024-07-25 07:22:07.947293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64189 ] 00:07:35.537 [2024-07-25 07:22:08.086968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.537 [2024-07-25 07:22:08.192690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:35.537 07:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.918 07:22:09 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.918 07:22:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:36.919 07:22:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.919 00:07:36.919 real 0m1.474s 00:07:36.919 user 0m1.289s 00:07:36.919 sys 0m0.097s 00:07:36.919 07:22:09 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.919 07:22:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.919 ************************************ 00:07:36.919 END TEST accel_dif_verify 00:07:36.919 ************************************ 00:07:36.919 07:22:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:36.919 07:22:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:36.919 07:22:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.919 07:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.919 ************************************ 00:07:36.919 START TEST accel_dif_generate 00:07:36.919 ************************************ 00:07:36.919 07:22:09 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.919 07:22:09 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:36.919 07:22:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:36.919 [2024-07-25 07:22:09.482424] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:36.919 [2024-07-25 07:22:09.482532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64229 ] 00:07:36.919 [2024-07-25 07:22:09.622197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.177 [2024-07-25 07:22:09.720308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:37.177 07:22:09 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.177 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.178 07:22:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:38.558 07:22:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.558 00:07:38.558 real 0m1.453s 00:07:38.558 user 0m1.268s 00:07:38.558 sys 0m0.099s 00:07:38.558 07:22:10 accel.accel_dif_generate -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.559 07:22:10 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:38.559 ************************************ 00:07:38.559 END TEST accel_dif_generate 00:07:38.559 ************************************ 00:07:38.559 07:22:10 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:38.559 07:22:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:38.559 07:22:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.559 07:22:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.559 ************************************ 00:07:38.559 START TEST accel_dif_generate_copy 00:07:38.559 ************************************ 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:38.559 07:22:10 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:38.559 [2024-07-25 07:22:10.996769] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:38.559 [2024-07-25 07:22:10.996916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64258 ] 00:07:38.559 [2024-07-25 07:22:11.135662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.559 [2024-07-25 07:22:11.229812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.559 07:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
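The real/user/sys summary and the START TEST / END TEST banners that appear next come from the run_test wrapper in common/autotest_common.sh (visible in the trace prefixes), which times each sub-test. A rough sketch of such a wrapper, inferred from the log output rather than copied from the actual implementation:

    # Illustrative only -- not the real run_test from common/autotest_common.sh.
    run_test() {
        local name=$1
        shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }

    # Matches the invocations visible in this trace, e.g.:
    # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib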
00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.008 00:07:40.008 real 0m1.450s 00:07:40.008 user 0m1.264s 00:07:40.008 sys 0m0.091s 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.008 07:22:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.008 ************************************ 00:07:40.008 END TEST accel_dif_generate_copy 00:07:40.008 ************************************ 00:07:40.008 07:22:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:40.008 07:22:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.008 07:22:12 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:40.008 07:22:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.008 07:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.008 ************************************ 00:07:40.008 START TEST accel_comp 00:07:40.008 ************************************ 00:07:40.008 07:22:12 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.008 07:22:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:40.009 07:22:12 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:40.009 07:22:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:40.009 [2024-07-25 07:22:12.509137] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:40.009 [2024-07-25 07:22:12.509224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64296 ] 00:07:40.009 [2024-07-25 07:22:12.642738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.009 [2024-07-25 07:22:12.733249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.269 07:22:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.222 07:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.223 ************************************ 00:07:41.223 END TEST accel_comp 00:07:41.223 ************************************ 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:41.223 07:22:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.223 00:07:41.223 real 0m1.441s 00:07:41.223 user 0m1.265s 00:07:41.223 sys 0m0.090s 00:07:41.223 07:22:13 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.223 07:22:13 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:41.482 07:22:13 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.482 07:22:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:41.482 07:22:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.482 07:22:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.482 ************************************ 00:07:41.482 START TEST accel_decomp 00:07:41.482 ************************************ 00:07:41.482 07:22:13 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.482 07:22:13 accel.accel_decomp 
-- accel/accel.sh@16 -- # local accel_opc 00:07:41.482 07:22:13 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:41.482 07:22:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.482 07:22:13 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.482 07:22:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.483 07:22:13 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:41.483 [2024-07-25 07:22:14.006696] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:41.483 [2024-07-25 07:22:14.006781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64329 ] 00:07:41.483 [2024-07-25 07:22:14.145227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.743 [2024-07-25 07:22:14.239850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
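For reference, the decompress workload being set up here uses the accel_perf invocation shown a few lines above. The same workload could in principle be run standalone with the flags copied verbatim from the trace; dropping -c /dev/fd/62 is an assumption, since that option only makes sense with the JSON config the wrapper supplies on that descriptor:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y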
00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.743 07:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.123 07:22:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.123 00:07:43.123 real 0m1.453s 00:07:43.123 user 0m1.265s 00:07:43.123 sys 0m0.103s 00:07:43.123 07:22:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.123 07:22:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:43.123 ************************************ 00:07:43.123 END TEST accel_decomp 00:07:43.123 ************************************ 00:07:43.123 07:22:15 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:43.123 07:22:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:43.123 07:22:15 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:43.123 07:22:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.123 ************************************ 00:07:43.123 START TEST accel_decomp_full 00:07:43.123 ************************************ 00:07:43.123 07:22:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:43.123 07:22:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:43.123 [2024-07-25 07:22:15.521814] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
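The accel_decomp_full run differs from the plain accel_decomp run above only by the extra "-o 0" option, and the buffer value read back below is '111250 bytes' instead of the '4096 bytes' seen in the earlier tests. The inference, from this trace alone rather than from accel_perf documentation, is that -o 0 makes the test operate on the full size of the bib input rather than a fixed 4 KiB block:

    # Both command lines are copied from this trace; only -o 0 differs.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y          # buffers: '4096 bytes'
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0     # buffer:  '111250 bytes'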
00:07:43.124 [2024-07-25 07:22:15.521892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64359 ] 00:07:43.124 [2024-07-25 07:22:15.662863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.124 [2024-07-25 07:22:15.762532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.124 07:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.500 07:22:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.500 00:07:44.501 real 0m1.462s 00:07:44.501 user 0m0.022s 00:07:44.501 sys 0m0.001s 00:07:44.501 07:22:16 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.501 07:22:16 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:44.501 ************************************ 00:07:44.501 END TEST accel_decomp_full 00:07:44.501 ************************************ 00:07:44.501 07:22:16 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.501 07:22:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:44.501 07:22:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.501 07:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.501 ************************************ 00:07:44.501 START TEST accel_decomp_mcore 00:07:44.501 ************************************ 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:44.501 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:44.501 [2024-07-25 07:22:17.034555] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:44.501 [2024-07-25 07:22:17.034641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64398 ] 00:07:44.501 [2024-07-25 07:22:17.174583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.759 [2024-07-25 07:22:17.277758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.759 [2024-07-25 07:22:17.278023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.759 [2024-07-25 07:22:17.277844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.759 [2024-07-25 07:22:17.278026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.759 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.760 07:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.137 00:07:46.137 real 0m1.471s 00:07:46.137 user 0m4.575s 00:07:46.137 sys 0m0.111s 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.137 07:22:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.137 ************************************ 00:07:46.137 END TEST accel_decomp_mcore 00:07:46.137 ************************************ 00:07:46.137 07:22:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.137 07:22:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:46.137 07:22:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.137 07:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.137 ************************************ 00:07:46.137 START TEST accel_decomp_full_mcore 00:07:46.137 ************************************ 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.137 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore 
-- accel/accel.sh@41 -- # jq -r . 00:07:46.138 [2024-07-25 07:22:18.571873] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:46.138 [2024-07-25 07:22:18.572019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64430 ] 00:07:46.138 [2024-07-25 07:22:18.713312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.138 [2024-07-25 07:22:18.814651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.138 [2024-07-25 07:22:18.814839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.138 [2024-07-25 07:22:18.815007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.138 [2024-07-25 07:22:18.815013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.138 07:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.517 00:07:47.517 real 0m1.486s 00:07:47.517 user 0m4.612s 00:07:47.517 sys 0m0.116s 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.517 07:22:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:47.517 ************************************ 00:07:47.517 END TEST accel_decomp_full_mcore 00:07:47.517 ************************************ 00:07:47.517 07:22:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.517 07:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:47.517 07:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.517 07:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.517 ************************************ 00:07:47.517 START TEST accel_decomp_mthread 00:07:47.517 ************************************ 00:07:47.517 07:22:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.517 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:47.517 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:47.517 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:47.518 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:47.518 [2024-07-25 07:22:20.114041] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:47.518 [2024-07-25 07:22:20.114218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64475 ] 00:07:47.778 [2024-07-25 07:22:20.251855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.778 [2024-07-25 07:22:20.348920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.778 07:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.156 ************************************ 00:07:49.156 END TEST accel_decomp_mthread 00:07:49.156 ************************************ 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.156 00:07:49.156 real 0m1.457s 00:07:49.156 user 0m1.276s 00:07:49.156 sys 0m0.095s 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.156 07:22:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:49.156 07:22:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.156 07:22:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:49.156 07:22:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.156 07:22:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.156 ************************************ 00:07:49.156 START TEST accel_decomp_full_mthread 00:07:49.156 
************************************ 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:49.156 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:49.156 [2024-07-25 07:22:21.629206] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:49.156 [2024-07-25 07:22:21.629346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64504 ] 00:07:49.156 [2024-07-25 07:22:21.770296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.156 [2024-07-25 07:22:21.868417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.416 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.417 07:22:21 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.417 07:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.362 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.633 00:07:50.633 real 0m1.500s 00:07:50.633 user 0m1.319s 00:07:50.633 sys 0m0.093s 00:07:50.633 ************************************ 00:07:50.633 END TEST accel_decomp_full_mthread 00:07:50.633 ************************************ 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.633 07:22:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
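The four decompress cases above (mcore, full_mcore, mthread, full_mthread) all drive the same accel_perf example binary; only the core mask, thread count, and transfer size change. A minimal by-hand sketch of those invocations follows. Paths and flags are copied from the command lines in this trace; the per-flag comments are an interpretation of the values the test loop echoes (0xf, '1 seconds', '111250 bytes', 2, ...), not accel_perf's own documentation, and the generated JSON accel config the real test feeds via -c /dev/fd/62 is omitted here.

  # Sketch only: re-issue the decompress runs outside the test harness.
  SPDK=/home/vagrant/spdk_repo/spdk
  PERF=$SPDK/build/examples/accel_perf
  BIB=$SPDK/test/accel/bib                             # compressed input used by every decompress case

  "$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf       # accel_decomp_mcore: run 1 s, verify output, cores 0-3
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf  # accel_decomp_full_mcore: whole 111250-byte buffer per op
  "$PERF" -t 1 -w decompress -l "$BIB" -y -T 2         # accel_decomp_mthread: 2 worker threads on core 0
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2    # accel_decomp_full_mthread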
00:07:50.633 07:22:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:50.633 07:22:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.633 07:22:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:50.633 07:22:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.633 07:22:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.633 07:22:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:50.633 07:22:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.633 07:22:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.633 07:22:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.633 07:22:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.633 07:22:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:50.633 07:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.633 07:22:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:50.633 ************************************ 00:07:50.633 START TEST accel_dif_functional_tests 00:07:50.633 ************************************ 00:07:50.633 07:22:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.633 [2024-07-25 07:22:23.208019] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:50.633 [2024-07-25 07:22:23.208175] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64546 ] 00:07:50.633 [2024-07-25 07:22:23.346471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.892 [2024-07-25 07:22:23.448825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.892 [2024-07-25 07:22:23.448884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.892 [2024-07-25 07:22:23.448886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.892 00:07:50.892 00:07:50.892 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.892 http://cunit.sourceforge.net/ 00:07:50.892 00:07:50.892 00:07:50.892 Suite: accel_dif 00:07:50.892 Test: verify: DIF generated, GUARD check ...passed 00:07:50.892 Test: verify: DIF generated, APPTAG check ...passed 00:07:50.892 Test: verify: DIF generated, REFTAG check ...passed 00:07:50.892 Test: verify: DIF not generated, GUARD check ...passed 00:07:50.892 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 07:22:23.521155] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.892 [2024-07-25 07:22:23.521222] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.892 passed 00:07:50.892 Test: verify: DIF not generated, REFTAG check ...passed 00:07:50.892 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:50.892 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:50.892 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:50.892 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:50.892 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-25 07:22:23.521261] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.892 [2024-07-25 07:22:23.521337] dif.c: 876:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:50.892 passed 00:07:50.892 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:50.892 Test: verify copy: DIF generated, GUARD check ...[2024-07-25 07:22:23.521483] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:50.892 passed 00:07:50.892 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:50.892 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:50.892 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 07:22:23.521675] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.892 passed 00:07:50.892 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:50.892 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 07:22:23.521703] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.892 [2024-07-25 07:22:23.521725] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5apassed 00:07:50.892 Test: generate copy: DIF generated, GUARD check ...5a 00:07:50.892 passed 00:07:50.892 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:50.892 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:50.892 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:50.892 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:50.892 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:50.892 Test: generate copy: iovecs-len validate ...[2024-07-25 07:22:23.521983] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:50.892 passed 00:07:50.892 Test: generate copy: buffer alignment validate ...passed 00:07:50.892 00:07:50.892 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.892 suites 1 1 n/a 0 0 00:07:50.892 tests 26 26 26 0 0 00:07:50.892 asserts 115 115 115 0 n/a 00:07:50.892 00:07:50.892 Elapsed time = 0.002 seconds 00:07:51.151 00:07:51.151 real 0m0.551s 00:07:51.151 user 0m0.686s 00:07:51.151 sys 0m0.128s 00:07:51.151 ************************************ 00:07:51.151 END TEST accel_dif_functional_tests 00:07:51.151 ************************************ 00:07:51.151 07:22:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.151 07:22:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:51.151 ************************************ 00:07:51.151 END TEST accel 00:07:51.151 ************************************ 00:07:51.151 00:07:51.151 real 0m33.909s 00:07:51.151 user 0m35.642s 00:07:51.151 sys 0m3.818s 00:07:51.151 07:22:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.151 07:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.151 07:22:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:51.151 07:22:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.151 07:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.151 07:22:23 -- common/autotest_common.sh@10 -- # set +x 00:07:51.151 ************************************ 00:07:51.151 START TEST accel_rpc 00:07:51.151 ************************************ 00:07:51.151 07:22:23 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:51.410 * Looking for test storage... 00:07:51.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:51.410 07:22:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:51.410 07:22:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64612 00:07:51.410 07:22:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:51.410 07:22:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64612 00:07:51.410 07:22:23 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64612 ']' 00:07:51.410 07:22:23 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.410 07:22:23 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.410 07:22:23 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.410 07:22:23 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.410 07:22:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.411 [2024-07-25 07:22:23.989697] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:07:51.411 [2024-07-25 07:22:23.989846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64612 ] 00:07:51.411 [2024-07-25 07:22:24.128023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.669 [2024-07-25 07:22:24.224996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.238 07:22:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.238 07:22:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.238 07:22:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:52.238 07:22:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:52.238 07:22:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:52.238 07:22:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:52.238 07:22:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:52.238 07:22:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.238 07:22:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.238 07:22:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.238 ************************************ 00:07:52.238 START TEST accel_assign_opcode 00:07:52.238 ************************************ 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.238 [2024-07-25 07:22:24.876246] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.238 [2024-07-25 07:22:24.888219] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.238 07:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # grep software 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.498 software 00:07:52.498 ************************************ 00:07:52.498 END TEST accel_assign_opcode 00:07:52.498 ************************************ 00:07:52.498 00:07:52.498 real 0m0.251s 00:07:52.498 user 0m0.042s 00:07:52.498 sys 0m0.015s 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.498 07:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.498 07:22:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64612 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64612 ']' 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64612 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64612 00:07:52.498 killing process with pid 64612 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64612' 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 64612 00:07:52.498 07:22:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 64612 00:07:53.067 00:07:53.067 real 0m1.712s 00:07:53.067 user 0m1.737s 00:07:53.067 sys 0m0.430s 00:07:53.067 07:22:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.067 ************************************ 00:07:53.067 END TEST accel_rpc 00:07:53.067 ************************************ 00:07:53.067 07:22:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.067 07:22:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:53.067 07:22:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.067 07:22:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.067 07:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:53.067 ************************************ 00:07:53.067 START TEST app_cmdline 00:07:53.067 ************************************ 00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:53.067 * Looking for test storage... 00:07:53.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.067 07:22:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:53.067 07:22:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64717 00:07:53.067 07:22:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:53.067 07:22:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64717 00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64717 ']' 00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
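The accel_rpc suite above exercises opcode assignment against a target that was started with --wait-for-rpc, i.e. before the accel framework is initialized. Every RPC method used below appears in the trace (accel_assign_opc, framework_start_init, accel_get_opc_assignments); the short sleep is only a stand-in for the suite's waitforlisten helper.

  # Sketch of the accel_assign_opcode flow, driven by hand with rpc.py.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC=$SPDK/scripts/rpc.py

  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &          # target parks in pre-init state
  tgt_pid=$!
  sleep 2                                              # crude wait for /var/tmp/spdk.sock

  "$RPC" accel_assign_opc -o copy -m incorrect         # bogus module name is accepted at this stage
  "$RPC" accel_assign_opc -o copy -m software          # the assignment the test verifies
  "$RPC" framework_start_init                          # finish initialization
  "$RPC" accel_get_opc_assignments | jq -r .copy       # prints: software

  kill "$tgt_pid"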
00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.067 07:22:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.067 [2024-07-25 07:22:25.747060] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:07:53.067 [2024-07-25 07:22:25.747142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64717 ] 00:07:53.326 [2024-07-25 07:22:25.883459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.326 [2024-07-25 07:22:26.005513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.264 07:22:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.264 07:22:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:54.264 { 00:07:54.264 "fields": { 00:07:54.264 "commit": "c0d54772e", 00:07:54.264 "major": 24, 00:07:54.264 "minor": 9, 00:07:54.264 "patch": 0, 00:07:54.264 "suffix": "-pre" 00:07:54.264 }, 00:07:54.264 "version": "SPDK v24.09-pre git sha1 c0d54772e" 00:07:54.264 } 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:54.264 07:22:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:54.264 07:22:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.264 07:22:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.265 07:22:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:54.265 07:22:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:54.265 07:22:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:54.265 07:22:26 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.523 2024/07/25 07:22:27 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:54.523 request: 00:07:54.523 { 00:07:54.523 "method": "env_dpdk_get_mem_stats", 00:07:54.523 "params": {} 00:07:54.523 } 00:07:54.523 Got JSON-RPC error response 00:07:54.523 GoRPCClient: error on JSON-RPC call 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.523 07:22:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64717 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64717 ']' 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64717 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64717 00:07:54.523 killing process with pid 64717 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64717' 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@967 -- # kill 64717 00:07:54.523 07:22:27 app_cmdline -- common/autotest_common.sh@972 -- # wait 64717 00:07:55.093 00:07:55.093 real 0m1.963s 00:07:55.093 user 0m2.407s 00:07:55.093 sys 0m0.456s 00:07:55.093 07:22:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.093 07:22:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.093 ************************************ 00:07:55.093 END TEST app_cmdline 00:07:55.093 ************************************ 00:07:55.093 07:22:27 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.093 07:22:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.093 07:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.093 07:22:27 -- common/autotest_common.sh@10 -- # set +x 00:07:55.093 ************************************ 00:07:55.093 START TEST version 00:07:55.093 ************************************ 00:07:55.093 07:22:27 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.093 * Looking for test storage... 
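The app_cmdline run that just finished boots spdk_tgt with an RPC allowlist and checks both directions: the two allowed methods succeed, and anything else is rejected with the -32601 "Method not found" error shown above. A by-hand equivalent, using only the flags, socket path, and method names visible in this trace:

  # Sketch of the cmdline.sh allowlist checks.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC=$SPDK/scripts/rpc.py

  "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 2                                              # stand-in for waitforlisten on /var/tmp/spdk.sock

  "$RPC" spdk_get_version | jq -r .version             # "SPDK v24.09-pre git sha1 c0d54772e"
  "$RPC" rpc_get_methods | jq -r '.[]' | sort          # exactly: rpc_get_methods, spdk_get_version
  "$RPC" env_dpdk_get_mem_stats                        # not allowlisted: JSON-RPC error -32601

  kill "$tgt_pid"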
00:07:55.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.093 07:22:27 version -- app/version.sh@17 -- # get_header_version major 00:07:55.093 07:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # cut -f2 00:07:55.093 07:22:27 version -- app/version.sh@17 -- # major=24 00:07:55.093 07:22:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:55.093 07:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # cut -f2 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.093 07:22:27 version -- app/version.sh@18 -- # minor=9 00:07:55.093 07:22:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:55.093 07:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # cut -f2 00:07:55.093 07:22:27 version -- app/version.sh@19 -- # patch=0 00:07:55.093 07:22:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:55.093 07:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # cut -f2 00:07:55.093 07:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.093 07:22:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:55.093 07:22:27 version -- app/version.sh@22 -- # version=24.9 00:07:55.093 07:22:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:55.093 07:22:27 version -- app/version.sh@28 -- # version=24.9rc0 00:07:55.093 07:22:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:55.093 07:22:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:55.093 07:22:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:55.093 07:22:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:55.093 00:07:55.093 real 0m0.192s 00:07:55.093 user 0m0.099s 00:07:55.093 sys 0m0.135s 00:07:55.093 07:22:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.093 07:22:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:55.093 ************************************ 00:07:55.093 END TEST version 00:07:55.093 ************************************ 00:07:55.353 07:22:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@198 -- # uname -s 00:07:55.353 07:22:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:55.353 07:22:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:55.353 07:22:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:55.353 07:22:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:55.353 07:22:27 -- common/autotest_common.sh@728 -- # xtrace_disable 
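The version test above reduces to a grep/cut/tr pipeline over include/spdk/version.h plus a cross-check against the Python package. A condensed sketch of what app/version.sh does, assuming the repo layout used in this run (the rc0 handling is an approximation of the observed behaviour):

  HDR=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0      # 24.9 + -pre -> 24.9rc0, as above
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]]                     # the test passes only if both sides agree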
00:07:55.353 07:22:27 -- common/autotest_common.sh@10 -- # set +x 00:07:55.353 07:22:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:55.353 07:22:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:55.353 07:22:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:55.353 07:22:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.353 07:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.353 07:22:27 -- common/autotest_common.sh@10 -- # set +x 00:07:55.353 ************************************ 00:07:55.353 START TEST nvmf_tcp 00:07:55.353 ************************************ 00:07:55.353 07:22:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:55.353 * Looking for test storage... 00:07:55.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:55.353 07:22:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:55.353 07:22:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:55.353 07:22:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:55.353 07:22:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.353 07:22:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.353 07:22:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.353 ************************************ 00:07:55.353 START TEST nvmf_target_core 00:07:55.353 ************************************ 00:07:55.353 07:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:55.613 * Looking for test storage... 00:07:55.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.613 07:22:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.614 ************************************ 00:07:55.614 START TEST nvmf_abort 00:07:55.614 ************************************ 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:55.614 * Looking for test storage... 
00:07:55.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:55.614 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.873 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:55.874 Cannot find device "nvmf_init_br" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:55.874 Cannot find device "nvmf_tgt_br" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.874 Cannot find device "nvmf_tgt_br2" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:55.874 07:22:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:55.874 Cannot find device "nvmf_init_br" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:55.874 Cannot find device "nvmf_tgt_br" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:55.874 Cannot find device "nvmf_tgt_br2" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:55.874 Cannot find device "nvmf_br" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:55.874 Cannot find device "nvmf_init_if" 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.874 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:56.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:07:56.134 00:07:56.134 --- 10.0.0.2 ping statistics --- 00:07:56.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.134 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:56.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:56.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:07:56.134 00:07:56.134 --- 10.0.0.3 ping statistics --- 00:07:56.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.134 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:56.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:07:56.134 00:07:56.134 --- 10.0.0.1 ping statistics --- 00:07:56.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.134 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=65086 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 65086 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 65086 ']' 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.134 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.134 [2024-07-25 07:22:28.862766] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
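Before any NVMe-oF traffic flows, nvmf_veth_init above builds the virtual topology the rest of the run relies on: a nvmf_tgt_ns_spdk namespace holding the two target-side addresses (10.0.0.2 and 10.0.0.3), an initiator-side interface at 10.0.0.1 in the root namespace, everything joined by the nvmf_br bridge, plus an iptables rule admitting TCP port 4420; the three pings above confirm connectivity before the target is launched inside the namespace. Condensed from the commands run above (as root, error output omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2     # initiator -> first target address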
00:07:56.134 [2024-07-25 07:22:28.862844] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.395 [2024-07-25 07:22:29.004304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.395 [2024-07-25 07:22:29.110530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.395 [2024-07-25 07:22:29.110581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.395 [2024-07-25 07:22:29.110589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.395 [2024-07-25 07:22:29.110595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.395 [2024-07-25 07:22:29.110600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.395 [2024-07-25 07:22:29.110715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.395 [2024-07-25 07:22:29.110896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.395 [2024-07-25 07:22:29.111042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 [2024-07-25 07:22:29.833878] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 Malloc0 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 
Delay0 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 [2024-07-25 07:22:29.922583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.331 07:22:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:57.590 [2024-07-25 07:22:30.119871] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:59.491 Initializing NVMe Controllers 00:07:59.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:59.491 controller IO queue size 128 less than required 00:07:59.491 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:59.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:59.491 Initialization complete. Launching workers. 
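With the target up inside the namespace, abort.sh above configures it entirely through rpc.py: a TCP transport, a 64 MB Malloc bdev with 4096-byte blocks wrapped in a Delay bdev (so in-flight I/O lingers long enough to be aborted), a subsystem exposing that namespace, and listeners on 10.0.0.2:4420; the bundled abort example is then pointed at it, and its results follow below. The same sequence, condensed (paths and parameters as in this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # queue depth 128 for 1 second on core 0, issuing aborts against the outstanding I/O
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128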
00:07:59.491 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38840 00:07:59.491 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38901, failed to submit 62 00:07:59.491 success 38844, unsuccess 57, failed 0 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.491 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.491 rmmod nvme_tcp 00:07:59.751 rmmod nvme_fabrics 00:07:59.751 rmmod nvme_keyring 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 65086 ']' 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 65086 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 65086 ']' 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 65086 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65086 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:59.751 killing process with pid 65086 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65086' 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 65086 00:07:59.751 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 65086 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:00.011 00:08:00.011 real 0m4.365s 00:08:00.011 user 0m12.282s 00:08:00.011 sys 0m0.971s 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.011 ************************************ 00:08:00.011 END TEST nvmf_abort 00:08:00.011 ************************************ 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.011 ************************************ 00:08:00.011 START TEST nvmf_ns_hotplug_stress 00:08:00.011 ************************************ 00:08:00.011 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:00.270 * Looking for test storage... 
00:08:00.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.270 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:00.271 Cannot find device "nvmf_tgt_br" 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.271 Cannot find device "nvmf_tgt_br2" 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:00.271 Cannot find device "nvmf_tgt_br" 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:00.271 Cannot find device "nvmf_tgt_br2" 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.271 07:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.271 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.529 07:22:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:00.529 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:00.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:08:00.530 00:08:00.530 --- 10.0.0.2 ping statistics --- 00:08:00.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.530 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:00.530 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.530 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:00.530 00:08:00.530 --- 10.0.0.3 ping statistics --- 00:08:00.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.530 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:08:00.530 00:08:00.530 --- 10.0.0.1 ping statistics --- 00:08:00.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.530 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=65358 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 65358 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 65358 ']' 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.530 07:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.530 [2024-07-25 07:22:33.217937] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:00.530 [2024-07-25 07:22:33.218057] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.788 [2024-07-25 07:22:33.360331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.788 [2024-07-25 07:22:33.463280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.788 [2024-07-25 07:22:33.463336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.788 [2024-07-25 07:22:33.463346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.788 [2024-07-25 07:22:33.463352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.788 [2024-07-25 07:22:33.463359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.788 [2024-07-25 07:22:33.463574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.788 [2024-07-25 07:22:33.464518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.788 [2024-07-25 07:22:33.464519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.724 [2024-07-25 07:22:34.390249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.724 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:01.982 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.242 [2024-07-25 07:22:34.854828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.242 07:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.500 07:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:02.757 Malloc0 00:08:02.757 07:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.015 Delay0 00:08:03.015 07:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.272 07:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:03.272 NULL1 00:08:03.530 07:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:03.530 07:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=65490 00:08:03.530 07:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:03.530 07:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:03.530 07:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.904 Read completed with error (sct=0, sc=11) 00:08:04.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.904 07:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.164 07:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:05.164 07:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:05.164 true 00:08:05.164 07:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:05.164 07:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.100 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.358 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:06.358 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:06.617 true 00:08:06.617 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:06.617 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.876 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.135 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:07.135 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:07.401 true 00:08:07.401 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:07.401 07:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.969 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.228 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:08.228 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:08.487 true 00:08:08.487 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:08.487 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.744 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.002 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:09.002 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:09.260 true 00:08:09.260 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:09.260 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.197 07:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.197 07:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:10.197 07:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1006 00:08:10.456 true 00:08:10.456 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:10.456 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.715 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.973 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:10.973 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:11.232 true 00:08:11.232 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:11.232 07:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.167 07:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.426 07:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:12.426 07:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:12.426 true 00:08:12.426 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:12.426 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.687 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.945 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:12.945 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:13.204 true 00:08:13.204 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:13.204 07:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.142 07:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.401 07:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:14.401 07:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:14.401 true 00:08:14.659 07:22:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:14.659 07:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.659 07:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.918 07:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:14.918 07:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:15.176 true 00:08:15.176 07:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:15.176 07:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.113 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.372 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:16.372 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:16.631 true 00:08:16.631 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:16.631 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.890 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.890 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:16.890 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:17.148 true 00:08:17.148 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:17.148 07:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.086 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.346 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:18.346 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:18.605 true 00:08:18.605 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 65490 00:08:18.605 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.865 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.124 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:19.124 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:19.382 true 00:08:19.382 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:19.382 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.320 07:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.320 07:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:20.320 07:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:20.580 true 00:08:20.580 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:20.580 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.853 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.112 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:21.112 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:21.372 true 00:08:21.372 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:21.372 07:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.310 07:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.310 07:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:22.310 07:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:22.621 true 00:08:22.621 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 65490 00:08:22.621 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.880 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.139 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:23.139 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:23.139 true 00:08:23.139 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:23.139 07:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.076 07:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.334 07:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:24.334 07:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:24.593 true 00:08:24.593 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:24.593 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.852 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.110 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:25.110 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:25.110 true 00:08:25.110 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:25.110 07:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.047 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.306 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:26.306 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:26.565 true 00:08:26.565 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:26.565 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.824 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.824 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:26.824 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:27.083 true 00:08:27.083 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:27.083 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.298 07:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.298 07:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:28.298 07:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:28.557 true 00:08:28.557 07:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:28.557 07:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.494 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.752 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:29.752 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:29.752 true 00:08:29.752 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:29.752 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.011 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.270 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:30.270 07:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:30.528 true 00:08:30.528 07:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:30.528 07:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.517 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.776 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:31.776 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:31.776 true 00:08:31.776 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:31.776 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.036 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.295 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:32.296 07:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:32.555 true 00:08:32.555 07:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:32.555 07:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.494 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.757 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:33.757 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:33.757 Initializing NVMe Controllers 00:08:33.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.757 Controller IO queue size 128, less than required. 00:08:33.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.757 Controller IO queue size 128, less than required. 00:08:33.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:33.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:33.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:33.757 Initialization complete. Launching workers. 00:08:33.757 ======================================================== 00:08:33.757 Latency(us) 00:08:33.757 Device Information : IOPS MiB/s Average min max 00:08:33.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 539.05 0.26 137306.16 3025.71 1148644.36 00:08:33.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13720.48 6.70 9329.11 2916.17 692025.26 00:08:33.757 ======================================================== 00:08:33.757 Total : 14259.53 6.96 14166.98 2916.17 1148644.36 00:08:33.757 00:08:33.757 true 00:08:33.757 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 65490 00:08:33.757 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (65490) - No such process 00:08:33.757 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 65490 00:08:33.757 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.017 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.276 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:34.276 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:34.276 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:34.276 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.276 07:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:34.542 null0 00:08:34.542 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.542 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.542 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:34.815 null1 00:08:34.816 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.816 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.816 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:34.816 null2 00:08:34.816 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.816 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.816 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:35.075 null3 00:08:35.075 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.075 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.075 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:35.335 null4 00:08:35.335 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.335 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.335 07:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:35.595 null5 00:08:35.595 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.595 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.595 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:35.854 null6 00:08:35.854 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.854 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.854 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:36.114 null7 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
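At this point the trace has moved from the single-namespace resize loop to the multi-threaded hotplug phase: eight null bdevs (null0 through null7, 100 MB with a 4096-byte block size) are created and one add/remove worker is launched per bdev. Pieced together from the xtrace records above and below, the launcher looks roughly like the sketch here; the actual test/nvmf/target/ns_hotplug_stress.sh may differ in details, and rpc_py is the path set earlier in the trace.

    # Reconstruction from the xtrace; not a verbatim copy of the test script.
    nthreads=8
    pids=()
    # one backing null bdev per worker (100 MB, 4096-byte block size)
    for (( i = 0; i < nthreads; i++ )); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    # one background add/remove worker per namespace ID, PIDs collected for a later wait
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done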
00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
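The interleaved records in this stretch come from those eight background workers sharing one xtrace stream, so consecutive lines do not belong to a single thread. Each worker runs the same small loop, which from the @14-@18 records looks approximately like the following; variable names are taken from the trace, and this is a sketch rather than the exact script body.

    # Approximate per-worker body inferred from the ns_hotplug_stress.sh xtrace.
    add_remove() {
        local nsid=$1 bdev=$2
        # ten attach/detach cycles so the initiator sees repeated namespace hotplug events
        for (( i = 0; i < 10; i++ )); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }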
00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.114 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
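Once all eight workers are running, the script blocks on the collected PIDs; that is the "wait 66511 66512 ..." record that follows, and the worker xtrace keeps interleaving after it because the add/remove loops are still in flight. A minimal equivalent, assuming the pids array from the launcher sketch above:

    # Block until every add/remove worker has finished its ten iterations.
    wait "${pids[@]}"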
00:08:36.115 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.115 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.115 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 66511 66512 66514 66516 66518 66519 66521 66523 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.374 07:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.633 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.634 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.634 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.634 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.892 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.151 07:23:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.151 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.411 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.411 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.411 07:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.411 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.671 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.930 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.930 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.930 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.930 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.931 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.190 
07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.190 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.448 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.448 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.449 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.449 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.707 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.707 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.707 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.707 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.708 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.965 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.966 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.966 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.966 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.966 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.966 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.223 07:23:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.223 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.482 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.482 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.482 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.482 07:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.482 07:23:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.482 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.740 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.998 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:40.257 
07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.257 07:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.515 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.778 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.779 07:23:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.779 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.044 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.303 rmmod nvme_tcp 00:08:41.303 rmmod nvme_fabrics 00:08:41.303 rmmod nvme_keyring 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set 
-e 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 65358 ']' 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 65358 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 65358 ']' 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 65358 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65358 00:08:41.303 killing process with pid 65358 00:08:41.303 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:41.304 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:41.304 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65358' 00:08:41.304 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 65358 00:08:41.304 07:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 65358 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:41.563 ************************************ 00:08:41.563 END TEST nvmf_ns_hotplug_stress 00:08:41.563 ************************************ 00:08:41.563 00:08:41.563 real 0m41.667s 00:08:41.563 user 3m17.846s 00:08:41.563 sys 0m11.214s 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.563 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core -- 
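For readers skimming the xtrace above: the nvmf_ns_hotplug_stress run that just finished (real 0m41.667s) launches several background loops that repeatedly attach and detach the null0..null7 bdevs as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1, waits for all of them (the "@66 wait 66511 66512 ..." step earlier in this chunk), and then tears everything down via nvmftestfini (rmmod nvme-tcp/nvme-fabrics/nvme-keyring, kill of the target process pid 65358, address flush on nvmf_init_if). The sketch below is not the actual ns_hotplug_stress.sh, just a minimal reconstruction of the pattern visible in the trace; the iteration bound comes from the "(( i < 10 ))" guard, while the launcher loop and variable names are illustrative.

```bash
#!/usr/bin/env bash
# Minimal sketch of the namespace hotplug stress pattern traced above.
# Assumptions (not shown in this log chunk): nvmf_tgt is already running and
# nqn.2016-06.io.spdk:cnode1 exists with bdevs null0..null7 created beforehand.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

hotplug_loop() {
    local nsid=$1 bdev=$2 i=0
    while (( i < 10 )); do     # same bound as the "(( i < 10 ))" guard in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
        (( ++i ))
    done
}

pids=()
for n in $(seq 1 8); do        # null0..null7 become namespace IDs 1..8
    hotplug_loop "$n" "null$((n - 1))" &
    pids+=($!)
done
wait "${pids[@]}"              # corresponds to the "@66 wait 66511 66512 ..." join step
```

Because the eight loops run concurrently, the add/remove RPCs interleave in the trace in no fixed order, which is exactly the hotplug race the test is stressing.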
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.823 ************************************ 00:08:41.823 START TEST nvmf_delete_subsystem 00:08:41.823 ************************************ 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:41.823 * Looking for test storage... 00:08:41.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.823 07:23:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:41.823 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:42.090 Cannot find device "nvmf_tgt_br" 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.090 Cannot find device "nvmf_tgt_br2" 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:42.090 Cannot find device "nvmf_tgt_br" 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:42.090 Cannot find device "nvmf_tgt_br2" 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.090 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:42.354 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:42.354 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:42.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:08:42.355 00:08:42.355 --- 10.0.0.2 ping statistics --- 00:08:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.355 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:42.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:08:42.355 00:08:42.355 --- 10.0.0.3 ping statistics --- 00:08:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.355 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:42.355 00:08:42.355 --- 10.0.0.1 ping statistics --- 00:08:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.355 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=67878 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 67878 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 67878 ']' 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.355 07:23:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.355 [2024-07-25 07:23:14.967028] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:42.355 [2024-07-25 07:23:14.967124] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.615 [2024-07-25 07:23:15.106895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:42.615 [2024-07-25 07:23:15.201212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.615 [2024-07-25 07:23:15.201281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.615 [2024-07-25 07:23:15.201289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.615 [2024-07-25 07:23:15.201294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.615 [2024-07-25 07:23:15.201299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.615 [2024-07-25 07:23:15.201551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.615 [2024-07-25 07:23:15.201550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.184 [2024-07-25 07:23:15.869131] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
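The nvmf_veth_init / nvmfappstart steps traced above reduce to roughly the following sketch. It is a condensed reading of the trace, not the real helpers: the device-existence checks, waitforlisten polling and cleanup paths are omitted.

  # namespace plus veth/bridge topology used by the TCP tests (sketch)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target lives inside the namespace, so 10.0.0.2:4420 is reached over the bridge
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 above are just sanity checks that this topology is wired up correctly before the target is started.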
00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.184 [2024-07-25 07:23:15.893256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.184 NULL1 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.184 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.443 Delay0 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=67929 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:43.443 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:43.443 [2024-07-25 07:23:16.109615] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
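With the Delay0 bdev imposing large artificial latency on every I/O against the null bdev, the test can be sure commands are still queued when the subsystem is torn down. The orchestration traced here (RPC provisioning, perf launch, nvmf_delete_subsystem, then polling the perf PID) amounts to the following sketch; it calls scripts/rpc.py directly where the trace uses the rpc_cmd helper, and the sleep and retry constants are taken from the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  # provision the target: TCP transport, one subsystem, a null bdev behind a delay bdev
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # start a deep queue of I/O, then delete the subsystem underneath it
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # queued commands are failed back (sct=0, sc=8) and perf should exit on its own
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1
      sleep 0.5
  done

The long run of "Read/Write completed with error (sct=0, sc=8)" lines that follows is therefore the expected outcome: commands still queued against Delay0 are aborted when cnode1 disappears, and the perf process reports the errors and exits instead of hanging.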
00:08:45.348 07:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.348 07:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.348 07:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, 
sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O 
failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 [2024-07-25 07:23:18.148161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2249390 is same with the state(5) to be set 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 starting I/O failed: -6 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.607 Write completed with error (sct=0, sc=8) 00:08:45.607 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write 
completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 starting I/O failed: -6 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Write completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:45.608 Read completed with error (sct=0, sc=8) 00:08:46.551 [2024-07-25 07:23:19.123929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2227510 is same with 
the state(5) to be set 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 [2024-07-25 07:23:19.145288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6e400d7a0 is same with the state(5) to be set 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 
00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 [2024-07-25 07:23:19.145543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22496c0 is same with the state(5) to be set 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Read completed with error (sct=0, sc=8) 00:08:46.551 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 [2024-07-25 07:23:19.145750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224aa80 is same with the state(5) to be set 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write 
completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Write completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 Read completed with error (sct=0, sc=8) 00:08:46.552 [2024-07-25 07:23:19.146199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6e400d000 is same with the state(5) to be set 00:08:46.552 Initializing NVMe Controllers 00:08:46.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:46.552 Controller IO queue size 128, less than required. 00:08:46.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:46.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:46.552 Initialization complete. Launching workers. 00:08:46.552 ======================================================== 00:08:46.552 Latency(us) 00:08:46.552 Device Information : IOPS MiB/s Average min max 00:08:46.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.25 0.09 883359.52 631.76 1017750.40 00:08:46.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 190.11 0.09 894372.59 651.76 1018941.78 00:08:46.552 ======================================================== 00:08:46.552 Total : 366.36 0.18 889074.41 631.76 1018941.78 00:08:46.552 00:08:46.552 [2024-07-25 07:23:19.147086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227510 (9): Bad file descriptor 00:08:46.552 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:46.552 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.552 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:46.552 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 67929 00:08:46.552 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 67929 00:08:47.137 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (67929) - No such process 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 67929 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@648 -- # local es=0 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 67929 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 67929 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.137 [2024-07-25 07:23:19.684781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=67975 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:47.137 07:23:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.399 [2024-07-25 07:23:19.875746] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:47.659 07:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.659 07:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:47.659 07:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.231 07:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.231 07:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:48.231 07:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.491 07:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.491 07:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:48.491 07:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.061 07:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.061 07:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:49.061 07:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.630 07:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.630 07:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:49.630 07:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.197 07:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.198 07:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:50.198 07:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.198 Initializing NVMe Controllers 00:08:50.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:50.198 Controller IO queue size 128, less than required. 00:08:50.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:50.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:50.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:50.198 Initialization complete. Launching workers. 
00:08:50.198 ======================================================== 00:08:50.198 Latency(us) 00:08:50.198 Device Information : IOPS MiB/s Average min max 00:08:50.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003020.25 1000208.86 1007635.66 00:08:50.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004463.48 1000279.48 1011698.08 00:08:50.198 ======================================================== 00:08:50.198 Total : 256.00 0.12 1003741.87 1000208.86 1011698.08 00:08:50.198 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 67975 00:08:50.767 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (67975) - No such process 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 67975 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.767 rmmod nvme_tcp 00:08:50.767 rmmod nvme_fabrics 00:08:50.767 rmmod nvme_keyring 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:50.767 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 67878 ']' 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 67878 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 67878 ']' 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 67878 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67878 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.768 killing 
process with pid 67878 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67878' 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 67878 00:08:50.768 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 67878 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:51.026 00:08:51.026 real 0m9.266s 00:08:51.026 user 0m28.847s 00:08:51.026 sys 0m1.138s 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.026 ************************************ 00:08:51.026 END TEST nvmf_delete_subsystem 00:08:51.026 ************************************ 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.026 ************************************ 00:08:51.026 START TEST nvmf_host_management 00:08:51.026 ************************************ 00:08:51.026 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.286 * Looking for test storage... 
00:08:51.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.286 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:51.287 Cannot find device "nvmf_tgt_br" 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.287 Cannot find device "nvmf_tgt_br2" 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:51.287 Cannot find device "nvmf_tgt_br" 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:51.287 Cannot find device "nvmf_tgt_br2" 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:51.287 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:51.287 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.287 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:51.287 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:51.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:08:51.547 00:08:51.547 --- 10.0.0.2 ping statistics --- 00:08:51.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.547 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:51.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:08:51.547 00:08:51.547 --- 10.0.0.3 ping statistics --- 00:08:51.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.547 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:51.547 00:08:51.547 --- 10.0.0.1 ping statistics --- 00:08:51.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.547 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=68226 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 68226 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 68226 ']' 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.548 07:23:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:51.808 [2024-07-25 07:23:24.320451] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:51.808 [2024-07-25 07:23:24.320514] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.808 [2024-07-25 07:23:24.465502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.067 [2024-07-25 07:23:24.559286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.067 [2024-07-25 07:23:24.559333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.067 [2024-07-25 07:23:24.559339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.067 [2024-07-25 07:23:24.559344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.067 [2024-07-25 07:23:24.559348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.067 [2024-07-25 07:23:24.559681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.067 [2024-07-25 07:23:24.559875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.067 [2024-07-25 07:23:24.560562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.067 [2024-07-25 07:23:24.560563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 [2024-07-25 07:23:25.253385] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:52.634 07:23:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 Malloc0 00:08:52.634 [2024-07-25 07:23:25.326142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.634 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=68298 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 68298 /var/tmp/bdevperf.sock 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 68298 ']' 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:52.894 { 00:08:52.894 "params": { 00:08:52.894 "name": "Nvme$subsystem", 00:08:52.894 "trtype": "$TEST_TRANSPORT", 00:08:52.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.894 "adrfam": "ipv4", 00:08:52.894 "trsvcid": "$NVMF_PORT", 00:08:52.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.894 "hdgst": ${hdgst:-false}, 00:08:52.894 "ddgst": ${ddgst:-false} 00:08:52.894 }, 00:08:52.894 "method": "bdev_nvme_attach_controller" 00:08:52.894 } 00:08:52.894 EOF 00:08:52.894 )") 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:52.894 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:52.894 "params": { 00:08:52.894 "name": "Nvme0", 00:08:52.894 "trtype": "tcp", 00:08:52.894 "traddr": "10.0.0.2", 00:08:52.894 "adrfam": "ipv4", 00:08:52.894 "trsvcid": "4420", 00:08:52.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:52.894 "hdgst": false, 00:08:52.894 "ddgst": false 00:08:52.894 }, 00:08:52.894 "method": "bdev_nvme_attach_controller" 00:08:52.894 }' 00:08:52.894 [2024-07-25 07:23:25.446283] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:08:52.894 [2024-07-25 07:23:25.446355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68298 ] 00:08:52.894 [2024-07-25 07:23:25.583491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.153 [2024-07-25 07:23:25.687892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.153 Running I/O for 10 seconds... 
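The trace that follows (waitforio) polls bdevperf over its RPC socket until Nvme0n1 has completed at least 100 reads before the host is removed from the subsystem. A minimal standalone sketch of that polling loop, assuming the same /var/tmp/bdevperf.sock socket, the Nvme0n1 bdev name, and the spdk_repo layout visible in the trace, could look like:

#!/usr/bin/env bash
# Poll bdevperf's RPC socket until the Nvme0n1 bdev reports enough read I/O.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # repo path as used in this job
SOCK=/var/tmp/bdevperf.sock

for i in $(seq 10 -1 1); do
    # bdev_get_iostat returns JSON; pull the read-op counter for the first bdev.
    reads=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        echo "Nvme0n1 served $reads reads"
        break
    fi
    sleep 0.25
done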
00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1000 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1000 -ge 100 ']' 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.724 [2024-07-25 
07:23:26.437164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.437975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same 
with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.438970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.439007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 [2024-07-25 07:23:26.439039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e310 is same with the state(5) to be set 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.724 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.724 [2024-07-25 07:23:26.443124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.724 [2024-07-25 07:23:26.443157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.724 [2024-07-25 07:23:26.443176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.724 [2024-07-25 07:23:26.443183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.725 [2024-07-25 07:23:26.443689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.725 [2024-07-25 07:23:26.443697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.443990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.443999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.726 [2024-07-25 07:23:26.444090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.726 [2024-07-25 07:23:26.444124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:53.726 [2024-07-25 07:23:26.444178] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc02820 was disconnected and freed. reset controller. 00:08:53.726 [2024-07-25 07:23:26.445255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:53.726 task offset: 8192 on job bdev=Nvme0n1 fails 00:08:53.726 00:08:53.726 Latency(us) 00:08:53.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.726 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:53.726 Job: Nvme0n1 ended in about 0.60 seconds with error 00:08:53.726 Verification LBA range: start 0x0 length 0x400 00:08:53.726 Nvme0n1 : 0.60 1817.62 113.60 106.92 0.00 32511.77 1724.26 28503.87 00:08:53.726 =================================================================================================================== 00:08:53.726 Total : 1817.62 113.60 106.92 0.00 32511.77 1724.26 28503.87 00:08:53.726 [2024-07-25 07:23:26.447525] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.726 [2024-07-25 07:23:26.447551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc02af0 (9): Bad file descriptor 00:08:53.726 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.726 07:23:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:53.985 [2024-07-25 07:23:26.456045] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 68298 00:08:54.923 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (68298) - No such process 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:54.923 { 00:08:54.923 "params": { 00:08:54.923 "name": "Nvme$subsystem", 00:08:54.923 "trtype": "$TEST_TRANSPORT", 00:08:54.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.923 "adrfam": "ipv4", 00:08:54.923 "trsvcid": "$NVMF_PORT", 00:08:54.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.923 "hdgst": ${hdgst:-false}, 00:08:54.923 "ddgst": ${ddgst:-false} 00:08:54.923 }, 00:08:54.923 "method": "bdev_nvme_attach_controller" 00:08:54.923 } 00:08:54.923 EOF 00:08:54.923 )") 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:54.923 07:23:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:54.923 "params": { 00:08:54.923 "name": "Nvme0", 00:08:54.923 "trtype": "tcp", 00:08:54.923 "traddr": "10.0.0.2", 00:08:54.923 "adrfam": "ipv4", 00:08:54.923 "trsvcid": "4420", 00:08:54.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:54.923 "hdgst": false, 00:08:54.923 "ddgst": false 00:08:54.923 }, 00:08:54.923 "method": "bdev_nvme_attach_controller" 00:08:54.923 }' 00:08:54.923 [2024-07-25 07:23:27.519811] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:08:54.923 [2024-07-25 07:23:27.519894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68347 ] 00:08:54.923 [2024-07-25 07:23:27.656804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.183 [2024-07-25 07:23:27.754717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.183 Running I/O for 1 seconds... 
00:08:56.562 00:08:56.562 Latency(us) 00:08:56.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.562 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:56.562 Verification LBA range: start 0x0 length 0x400 00:08:56.562 Nvme0n1 : 1.03 2059.38 128.71 0.00 0.00 30528.21 4121.04 29534.13 00:08:56.562 =================================================================================================================== 00:08:56.562 Total : 2059.38 128.71 0.00 0.00 30528.21 4121.04 29534.13 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.562 rmmod nvme_tcp 00:08:56.562 rmmod nvme_fabrics 00:08:56.562 rmmod nvme_keyring 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 68226 ']' 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 68226 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 68226 ']' 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 68226 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68226 00:08:56.562 killing process with pid 68226 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:56.562 07:23:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68226' 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 68226 00:08:56.562 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 68226 00:08:56.821 [2024-07-25 07:23:29.483405] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.821 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:57.082 00:08:57.082 real 0m5.869s 00:08:57.082 user 0m22.538s 00:08:57.082 sys 0m1.336s 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.082 ************************************ 00:08:57.082 END TEST nvmf_host_management 00:08:57.082 ************************************ 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.082 ************************************ 00:08:57.082 START TEST nvmf_lvol 00:08:57.082 ************************************ 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:57.082 * Looking for test storage... 
00:08:57.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:57.082 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:57.083 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:57.341 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:57.341 Cannot find device "nvmf_tgt_br" 00:08:57.341 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:57.341 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:57.341 Cannot find device "nvmf_tgt_br2" 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:57.342 Cannot find device "nvmf_tgt_br" 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:57.342 Cannot find device "nvmf_tgt_br2" 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:57.342 07:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:57.342 07:23:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:57.342 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:57.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:57.601 00:08:57.601 --- 10.0.0.2 ping statistics --- 00:08:57.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.601 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:57.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:57.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:57.601 00:08:57.601 --- 10.0.0.3 ping statistics --- 00:08:57.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.601 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:57.601 00:08:57.601 --- 10.0.0.1 ping statistics --- 00:08:57.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.601 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=68555 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 68555 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 68555 ']' 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.601 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.602 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.602 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.602 07:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.602 [2024-07-25 07:23:30.174367] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:08:57.602 [2024-07-25 07:23:30.174444] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.602 [2024-07-25 07:23:30.312463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.860 [2024-07-25 07:23:30.405812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.860 [2024-07-25 07:23:30.405853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.860 [2024-07-25 07:23:30.405859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.860 [2024-07-25 07:23:30.405864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.860 [2024-07-25 07:23:30.405868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.860 [2024-07-25 07:23:30.406101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.860 [2024-07-25 07:23:30.406224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.860 [2024-07-25 07:23:30.406226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.427 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:58.687 [2024-07-25 07:23:31.261314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.687 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.945 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:58.945 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.204 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:59.204 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:59.462 07:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:59.721 07:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=97dcbe77-d73f-4dfa-96f6-5111611b1ddd 00:08:59.721 07:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
97dcbe77-d73f-4dfa-96f6-5111611b1ddd lvol 20 00:08:59.721 07:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bb238c96-41db-4d77-9092-1d33d31d173b 00:08:59.722 07:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.036 07:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb238c96-41db-4d77-9092-1d33d31d173b 00:09:00.295 07:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.295 [2024-07-25 07:23:32.984636] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.295 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.555 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=68697 00:09:00.555 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:00.555 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:01.498 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot bb238c96-41db-4d77-9092-1d33d31d173b MY_SNAPSHOT 00:09:01.758 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2a1ed483-c309-4401-b1ad-51a5fa4f1950 00:09:01.758 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize bb238c96-41db-4d77-9092-1d33d31d173b 30 00:09:02.017 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2a1ed483-c309-4401-b1ad-51a5fa4f1950 MY_CLONE 00:09:02.278 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8d551617-086c-4bd2-91b5-1a1f0c7aeaff 00:09:02.278 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8d551617-086c-4bd2-91b5-1a1f0c7aeaff 00:09:02.845 07:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 68697 00:09:11.056 Initializing NVMe Controllers 00:09:11.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:11.056 Controller IO queue size 128, less than required. 00:09:11.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:11.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:11.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:11.056 Initialization complete. Launching workers. 
00:09:11.056 ======================================================== 00:09:11.056 Latency(us) 00:09:11.056 Device Information : IOPS MiB/s Average min max 00:09:11.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11274.00 44.04 11355.77 468.26 112814.54 00:09:11.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11122.30 43.45 11509.23 2177.40 49173.07 00:09:11.056 ======================================================== 00:09:11.056 Total : 22396.30 87.49 11431.98 468.26 112814.54 00:09:11.056 00:09:11.056 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.056 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bb238c96-41db-4d77-9092-1d33d31d173b 00:09:11.315 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97dcbe77-d73f-4dfa-96f6-5111611b1ddd 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.574 rmmod nvme_tcp 00:09:11.574 rmmod nvme_fabrics 00:09:11.574 rmmod nvme_keyring 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 68555 ']' 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 68555 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 68555 ']' 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 68555 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.574 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68555 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 68555' 00:09:11.834 killing process with pid 68555 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 68555 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 68555 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.834 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:12.094 00:09:12.094 real 0m14.983s 00:09:12.094 user 1m3.628s 00:09:12.094 sys 0m2.971s 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:12.094 ************************************ 00:09:12.094 END TEST nvmf_lvol 00:09:12.094 ************************************ 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.094 ************************************ 00:09:12.094 START TEST nvmf_lvs_grow 00:09:12.094 ************************************ 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:12.094 * Looking for test storage... 
00:09:12.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.094 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.353 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:12.354 Cannot find device "nvmf_tgt_br" 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.354 Cannot find device "nvmf_tgt_br2" 00:09:12.354 07:23:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:12.354 Cannot find device "nvmf_tgt_br" 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:12.354 Cannot find device "nvmf_tgt_br2" 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:12.354 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.354 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.614 07:23:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:12.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:09:12.614 00:09:12.614 --- 10.0.0.2 ping statistics --- 00:09:12.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.614 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:12.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:12.614 00:09:12.614 --- 10.0.0.3 ping statistics --- 00:09:12.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.614 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:12.614 00:09:12.614 --- 10.0.0.1 ping statistics --- 00:09:12.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.614 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=69063 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 69063 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 69063 ']' 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.614 07:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.614 [2024-07-25 07:23:45.285424] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:12.614 [2024-07-25 07:23:45.285513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.874 [2024-07-25 07:23:45.426336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.874 [2024-07-25 07:23:45.520938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.874 [2024-07-25 07:23:45.520983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.874 [2024-07-25 07:23:45.520989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.874 [2024-07-25 07:23:45.520994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.874 [2024-07-25 07:23:45.520998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.874 [2024-07-25 07:23:45.521017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.441 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.441 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:13.441 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.441 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.442 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.700 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.700 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:13.700 [2024-07-25 07:23:46.415229] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.960 ************************************ 00:09:13.960 START TEST lvs_grow_clean 00:09:13.960 ************************************ 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:13.960 07:23:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:13.960 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.220 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.220 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.220 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:14.479 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:14.479 07:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.479 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:14.479 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:14.479 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 71cbd88a-27aa-497d-bb19-106c91e7658e lvol 150 00:09:14.739 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5a022922-269d-44d8-a8c4-66fbd3a8553c 00:09:14.739 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.739 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:14.998 [2024-07-25 07:23:47.584175] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:14.998 [2024-07-25 07:23:47.584248] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:14.998 true 00:09:14.998 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:14.998 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.256 07:23:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.256 07:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.516 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a022922-269d-44d8-a8c4-66fbd3a8553c 00:09:15.516 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:15.775 [2024-07-25 07:23:48.411065] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.775 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.034 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.034 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=69223 00:09:16.034 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 69223 /var/tmp/bdevperf.sock 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 69223 ']' 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.035 07:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.035 [2024-07-25 07:23:48.691692] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:16.035 [2024-07-25 07:23:48.691767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69223 ] 00:09:16.294 [2024-07-25 07:23:48.831092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.294 [2024-07-25 07:23:48.933134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.862 07:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.862 07:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:16.862 07:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:17.121 Nvme0n1 00:09:17.121 07:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:17.381 [ 00:09:17.381 { 00:09:17.381 "aliases": [ 00:09:17.381 "5a022922-269d-44d8-a8c4-66fbd3a8553c" 00:09:17.381 ], 00:09:17.382 "assigned_rate_limits": { 00:09:17.382 "r_mbytes_per_sec": 0, 00:09:17.382 "rw_ios_per_sec": 0, 00:09:17.382 "rw_mbytes_per_sec": 0, 00:09:17.382 "w_mbytes_per_sec": 0 00:09:17.382 }, 00:09:17.382 "block_size": 4096, 00:09:17.382 "claimed": false, 00:09:17.382 "driver_specific": { 00:09:17.382 "mp_policy": "active_passive", 00:09:17.382 "nvme": [ 00:09:17.382 { 00:09:17.382 "ctrlr_data": { 00:09:17.382 "ana_reporting": false, 00:09:17.382 "cntlid": 1, 00:09:17.382 "firmware_revision": "24.09", 00:09:17.382 "model_number": "SPDK bdev Controller", 00:09:17.382 "multi_ctrlr": true, 00:09:17.382 "oacs": { 00:09:17.382 "firmware": 0, 00:09:17.382 "format": 0, 00:09:17.382 "ns_manage": 0, 00:09:17.382 "security": 0 00:09:17.382 }, 00:09:17.382 "serial_number": "SPDK0", 00:09:17.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.382 "vendor_id": "0x8086" 00:09:17.382 }, 00:09:17.382 "ns_data": { 00:09:17.382 "can_share": true, 00:09:17.382 "id": 1 00:09:17.382 }, 00:09:17.382 "trid": { 00:09:17.382 "adrfam": "IPv4", 00:09:17.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.382 "traddr": "10.0.0.2", 00:09:17.382 "trsvcid": "4420", 00:09:17.382 "trtype": "TCP" 00:09:17.382 }, 00:09:17.382 "vs": { 00:09:17.382 "nvme_version": "1.3" 00:09:17.382 } 00:09:17.382 } 00:09:17.382 ] 00:09:17.382 }, 00:09:17.382 "memory_domains": [ 00:09:17.382 { 00:09:17.382 "dma_device_id": "system", 00:09:17.382 "dma_device_type": 1 00:09:17.382 } 00:09:17.382 ], 00:09:17.382 "name": "Nvme0n1", 00:09:17.382 "num_blocks": 38912, 00:09:17.382 "product_name": "NVMe disk", 00:09:17.382 "supported_io_types": { 00:09:17.382 "abort": true, 00:09:17.382 "compare": true, 00:09:17.382 "compare_and_write": true, 00:09:17.382 "copy": true, 00:09:17.382 "flush": true, 00:09:17.382 "get_zone_info": false, 00:09:17.382 "nvme_admin": true, 00:09:17.382 "nvme_io": true, 00:09:17.382 "nvme_io_md": false, 00:09:17.382 "nvme_iov_md": false, 00:09:17.382 "read": true, 00:09:17.382 "reset": true, 00:09:17.382 "seek_data": false, 00:09:17.382 "seek_hole": false, 00:09:17.382 "unmap": true, 00:09:17.382 "write": true, 00:09:17.382 
"write_zeroes": true, 00:09:17.382 "zcopy": false, 00:09:17.382 "zone_append": false, 00:09:17.382 "zone_management": false 00:09:17.382 }, 00:09:17.382 "uuid": "5a022922-269d-44d8-a8c4-66fbd3a8553c", 00:09:17.382 "zoned": false 00:09:17.382 } 00:09:17.382 ] 00:09:17.382 07:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=69276 00:09:17.382 07:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:17.382 07:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.641 Running I/O for 10 seconds... 00:09:18.578 Latency(us) 00:09:18.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.578 Nvme0n1 : 1.00 10692.00 41.77 0.00 0.00 0.00 0.00 0.00 00:09:18.578 =================================================================================================================== 00:09:18.578 Total : 10692.00 41.77 0.00 0.00 0.00 0.00 0.00 00:09:18.578 00:09:19.515 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:19.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.515 Nvme0n1 : 2.00 10724.00 41.89 0.00 0.00 0.00 0.00 0.00 00:09:19.515 =================================================================================================================== 00:09:19.515 Total : 10724.00 41.89 0.00 0.00 0.00 0.00 0.00 00:09:19.515 00:09:19.774 true 00:09:19.774 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:19.774 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.033 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:20.033 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.033 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 69276 00:09:20.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.602 Nvme0n1 : 3.00 10662.67 41.65 0.00 0.00 0.00 0.00 0.00 00:09:20.602 =================================================================================================================== 00:09:20.602 Total : 10662.67 41.65 0.00 0.00 0.00 0.00 0.00 00:09:20.602 00:09:21.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.536 Nvme0n1 : 4.00 10590.50 41.37 0.00 0.00 0.00 0.00 0.00 00:09:21.536 =================================================================================================================== 00:09:21.536 Total : 10590.50 41.37 0.00 0.00 0.00 0.00 0.00 00:09:21.536 00:09:22.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.472 Nvme0n1 : 5.00 10559.80 41.25 0.00 0.00 0.00 0.00 0.00 00:09:22.472 
=================================================================================================================== 00:09:22.472 Total : 10559.80 41.25 0.00 0.00 0.00 0.00 0.00 00:09:22.472 00:09:23.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.410 Nvme0n1 : 6.00 10499.33 41.01 0.00 0.00 0.00 0.00 0.00 00:09:23.410 =================================================================================================================== 00:09:23.410 Total : 10499.33 41.01 0.00 0.00 0.00 0.00 0.00 00:09:23.410 00:09:24.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.791 Nvme0n1 : 7.00 10472.71 40.91 0.00 0.00 0.00 0.00 0.00 00:09:24.791 =================================================================================================================== 00:09:24.791 Total : 10472.71 40.91 0.00 0.00 0.00 0.00 0.00 00:09:24.791 00:09:25.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.728 Nvme0n1 : 8.00 10450.50 40.82 0.00 0.00 0.00 0.00 0.00 00:09:25.728 =================================================================================================================== 00:09:25.728 Total : 10450.50 40.82 0.00 0.00 0.00 0.00 0.00 00:09:25.728 00:09:26.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.665 Nvme0n1 : 9.00 10431.78 40.75 0.00 0.00 0.00 0.00 0.00 00:09:26.665 =================================================================================================================== 00:09:26.665 Total : 10431.78 40.75 0.00 0.00 0.00 0.00 0.00 00:09:26.665 00:09:27.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.621 Nvme0n1 : 10.00 10411.80 40.67 0.00 0.00 0.00 0.00 0.00 00:09:27.621 =================================================================================================================== 00:09:27.621 Total : 10411.80 40.67 0.00 0.00 0.00 0.00 0.00 00:09:27.621 00:09:27.621 00:09:27.621 Latency(us) 00:09:27.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.621 Nvme0n1 : 10.01 10418.63 40.70 0.00 0.00 12281.25 5866.76 25870.98 00:09:27.621 =================================================================================================================== 00:09:27.621 Total : 10418.63 40.70 0.00 0.00 12281.25 5866.76 25870.98 00:09:27.621 0 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 69223 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 69223 ']' 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 69223 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69223 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # 
'[' reactor_1 = sudo ']' 00:09:27.621 killing process with pid 69223 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69223' 00:09:27.621 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.621 00:09:27.621 Latency(us) 00:09:27.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.621 =================================================================================================================== 00:09:27.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 69223 00:09:27.621 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 69223 00:09:27.880 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.880 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.140 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:28.140 07:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:28.399 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:28.400 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:28.400 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.659 [2024-07-25 07:24:01.262722] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- 
# type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:28.659 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:28.918 2024/07/25 07:24:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:71cbd88a-27aa-497d-bb19-106c91e7658e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:28.918 request: 00:09:28.918 { 00:09:28.918 "method": "bdev_lvol_get_lvstores", 00:09:28.918 "params": { 00:09:28.918 "uuid": "71cbd88a-27aa-497d-bb19-106c91e7658e" 00:09:28.918 } 00:09:28.918 } 00:09:28.918 Got JSON-RPC error response 00:09:28.918 GoRPCClient: error on JSON-RPC call 00:09:28.918 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:28.918 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:28.918 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:28.918 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:28.918 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.177 aio_bdev 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5a022922-269d-44d8-a8c4-66fbd3a8553c 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5a022922-269d-44d8-a8c4-66fbd3a8553c 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:29.177 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.437 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5a022922-269d-44d8-a8c4-66fbd3a8553c -t 2000 00:09:29.697 [ 00:09:29.697 { 00:09:29.697 "aliases": [ 00:09:29.697 "lvs/lvol" 00:09:29.697 ], 00:09:29.697 "assigned_rate_limits": { 00:09:29.697 "r_mbytes_per_sec": 0, 00:09:29.697 "rw_ios_per_sec": 0, 00:09:29.697 "rw_mbytes_per_sec": 0, 00:09:29.697 
"w_mbytes_per_sec": 0 00:09:29.697 }, 00:09:29.697 "block_size": 4096, 00:09:29.697 "claimed": false, 00:09:29.697 "driver_specific": { 00:09:29.697 "lvol": { 00:09:29.697 "base_bdev": "aio_bdev", 00:09:29.697 "clone": false, 00:09:29.697 "esnap_clone": false, 00:09:29.697 "lvol_store_uuid": "71cbd88a-27aa-497d-bb19-106c91e7658e", 00:09:29.697 "num_allocated_clusters": 38, 00:09:29.697 "snapshot": false, 00:09:29.697 "thin_provision": false 00:09:29.697 } 00:09:29.697 }, 00:09:29.697 "name": "5a022922-269d-44d8-a8c4-66fbd3a8553c", 00:09:29.697 "num_blocks": 38912, 00:09:29.697 "product_name": "Logical Volume", 00:09:29.697 "supported_io_types": { 00:09:29.697 "abort": false, 00:09:29.697 "compare": false, 00:09:29.697 "compare_and_write": false, 00:09:29.697 "copy": false, 00:09:29.697 "flush": false, 00:09:29.697 "get_zone_info": false, 00:09:29.697 "nvme_admin": false, 00:09:29.697 "nvme_io": false, 00:09:29.697 "nvme_io_md": false, 00:09:29.697 "nvme_iov_md": false, 00:09:29.697 "read": true, 00:09:29.697 "reset": true, 00:09:29.697 "seek_data": true, 00:09:29.697 "seek_hole": true, 00:09:29.697 "unmap": true, 00:09:29.697 "write": true, 00:09:29.697 "write_zeroes": true, 00:09:29.697 "zcopy": false, 00:09:29.697 "zone_append": false, 00:09:29.697 "zone_management": false 00:09:29.697 }, 00:09:29.697 "uuid": "5a022922-269d-44d8-a8c4-66fbd3a8553c", 00:09:29.697 "zoned": false 00:09:29.697 } 00:09:29.697 ] 00:09:29.697 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:29.697 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:29.697 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:29.956 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:29.956 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:29.956 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:29.956 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:29.956 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5a022922-269d-44d8-a8c4-66fbd3a8553c 00:09:30.215 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71cbd88a-27aa-497d-bb19-106c91e7658e 00:09:30.474 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.735 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:31.002 ************************************ 00:09:31.002 END TEST lvs_grow_clean 00:09:31.002 ************************************ 00:09:31.002 00:09:31.002 real 0m17.199s 00:09:31.002 user 0m16.567s 
00:09:31.002 sys 0m1.917s 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.002 ************************************ 00:09:31.002 START TEST lvs_grow_dirty 00:09:31.002 ************************************ 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:31.002 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.261 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:31.261 07:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:31.519 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=902140b1-3b77-4350-9d19-ceadf71938de 00:09:31.519 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:31.519 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:31.778 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:31.778 07:24:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:31.778 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 902140b1-3b77-4350-9d19-ceadf71938de lvol 150 00:09:32.036 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bd6000f5-addc-442b-9254-cc36f62c8236 00:09:32.036 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:32.036 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:32.295 [2024-07-25 07:24:04.839180] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:32.295 [2024-07-25 07:24:04.839261] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:32.295 true 00:09:32.295 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:32.295 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:32.563 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:32.563 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:32.821 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bd6000f5-addc-442b-9254-cc36f62c8236 00:09:32.822 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:33.080 [2024-07-25 07:24:05.737996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.080 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=69661 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 69661 /var/tmp/bdevperf.sock 00:09:33.340 07:24:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 69661 ']' 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:33.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.340 07:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.340 [2024-07-25 07:24:06.011172] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:33.340 [2024-07-25 07:24:06.011242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69661 ] 00:09:33.599 [2024-07-25 07:24:06.149009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.599 [2024-07-25 07:24:06.255717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.534 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.534 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:34.534 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:34.534 Nvme0n1 00:09:34.534 07:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:34.793 [ 00:09:34.793 { 00:09:34.793 "aliases": [ 00:09:34.793 "bd6000f5-addc-442b-9254-cc36f62c8236" 00:09:34.793 ], 00:09:34.793 "assigned_rate_limits": { 00:09:34.793 "r_mbytes_per_sec": 0, 00:09:34.793 "rw_ios_per_sec": 0, 00:09:34.793 "rw_mbytes_per_sec": 0, 00:09:34.793 "w_mbytes_per_sec": 0 00:09:34.793 }, 00:09:34.793 "block_size": 4096, 00:09:34.793 "claimed": false, 00:09:34.793 "driver_specific": { 00:09:34.793 "mp_policy": "active_passive", 00:09:34.793 "nvme": [ 00:09:34.793 { 00:09:34.793 "ctrlr_data": { 00:09:34.793 "ana_reporting": false, 00:09:34.793 "cntlid": 1, 00:09:34.793 "firmware_revision": "24.09", 00:09:34.793 "model_number": "SPDK bdev Controller", 00:09:34.793 "multi_ctrlr": true, 00:09:34.793 "oacs": { 00:09:34.793 "firmware": 0, 00:09:34.793 "format": 0, 00:09:34.793 "ns_manage": 0, 00:09:34.793 "security": 0 00:09:34.793 }, 00:09:34.793 "serial_number": "SPDK0", 00:09:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:34.793 "vendor_id": "0x8086" 00:09:34.793 }, 00:09:34.793 "ns_data": { 00:09:34.793 "can_share": true, 00:09:34.793 "id": 1 00:09:34.793 }, 00:09:34.793 "trid": { 00:09:34.793 "adrfam": "IPv4", 
00:09:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:34.793 "traddr": "10.0.0.2", 00:09:34.793 "trsvcid": "4420", 00:09:34.793 "trtype": "TCP" 00:09:34.793 }, 00:09:34.793 "vs": { 00:09:34.793 "nvme_version": "1.3" 00:09:34.793 } 00:09:34.793 } 00:09:34.793 ] 00:09:34.793 }, 00:09:34.793 "memory_domains": [ 00:09:34.793 { 00:09:34.793 "dma_device_id": "system", 00:09:34.793 "dma_device_type": 1 00:09:34.793 } 00:09:34.793 ], 00:09:34.793 "name": "Nvme0n1", 00:09:34.793 "num_blocks": 38912, 00:09:34.793 "product_name": "NVMe disk", 00:09:34.793 "supported_io_types": { 00:09:34.793 "abort": true, 00:09:34.793 "compare": true, 00:09:34.793 "compare_and_write": true, 00:09:34.793 "copy": true, 00:09:34.793 "flush": true, 00:09:34.793 "get_zone_info": false, 00:09:34.793 "nvme_admin": true, 00:09:34.793 "nvme_io": true, 00:09:34.793 "nvme_io_md": false, 00:09:34.793 "nvme_iov_md": false, 00:09:34.793 "read": true, 00:09:34.793 "reset": true, 00:09:34.793 "seek_data": false, 00:09:34.793 "seek_hole": false, 00:09:34.793 "unmap": true, 00:09:34.793 "write": true, 00:09:34.793 "write_zeroes": true, 00:09:34.793 "zcopy": false, 00:09:34.793 "zone_append": false, 00:09:34.793 "zone_management": false 00:09:34.793 }, 00:09:34.793 "uuid": "bd6000f5-addc-442b-9254-cc36f62c8236", 00:09:34.793 "zoned": false 00:09:34.793 } 00:09:34.793 ] 00:09:34.793 07:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:34.793 07:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=69714 00:09:34.793 07:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:35.052 Running I/O for 10 seconds... 
00:09:35.988 Latency(us) 00:09:35.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.988 Nvme0n1 : 1.00 10850.00 42.38 0.00 0.00 0.00 0.00 0.00 00:09:35.988 =================================================================================================================== 00:09:35.988 Total : 10850.00 42.38 0.00 0.00 0.00 0.00 0.00 00:09:35.988 00:09:36.935 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:36.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.935 Nvme0n1 : 2.00 10859.00 42.42 0.00 0.00 0.00 0.00 0.00 00:09:36.935 =================================================================================================================== 00:09:36.935 Total : 10859.00 42.42 0.00 0.00 0.00 0.00 0.00 00:09:36.935 00:09:37.195 true 00:09:37.195 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:37.195 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:37.455 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:37.455 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:37.455 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 69714 00:09:38.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.020 Nvme0n1 : 3.00 10242.33 40.01 0.00 0.00 0.00 0.00 0.00 00:09:38.020 =================================================================================================================== 00:09:38.020 Total : 10242.33 40.01 0.00 0.00 0.00 0.00 0.00 00:09:38.020 00:09:38.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.953 Nvme0n1 : 4.00 10117.50 39.52 0.00 0.00 0.00 0.00 0.00 00:09:38.953 =================================================================================================================== 00:09:38.953 Total : 10117.50 39.52 0.00 0.00 0.00 0.00 0.00 00:09:38.953 00:09:39.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.887 Nvme0n1 : 5.00 10055.80 39.28 0.00 0.00 0.00 0.00 0.00 00:09:39.887 =================================================================================================================== 00:09:39.887 Total : 10055.80 39.28 0.00 0.00 0.00 0.00 0.00 00:09:39.887 00:09:41.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.260 Nvme0n1 : 6.00 10022.83 39.15 0.00 0.00 0.00 0.00 0.00 00:09:41.260 =================================================================================================================== 00:09:41.260 Total : 10022.83 39.15 0.00 0.00 0.00 0.00 0.00 00:09:41.260 00:09:41.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.825 Nvme0n1 : 7.00 9951.14 38.87 0.00 0.00 0.00 0.00 0.00 00:09:41.825 =================================================================================================================== 
00:09:41.825 Total : 9951.14 38.87 0.00 0.00 0.00 0.00 0.00 00:09:41.825 00:09:43.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.199 Nvme0n1 : 8.00 8754.50 34.20 0.00 0.00 0.00 0.00 0.00 00:09:43.199 =================================================================================================================== 00:09:43.199 Total : 8754.50 34.20 0.00 0.00 0.00 0.00 0.00 00:09:43.199 00:09:44.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.144 Nvme0n1 : 9.00 8771.78 34.26 0.00 0.00 0.00 0.00 0.00 00:09:44.144 =================================================================================================================== 00:09:44.144 Total : 8771.78 34.26 0.00 0.00 0.00 0.00 0.00 00:09:44.144 00:09:45.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.083 Nvme0n1 : 10.00 8849.00 34.57 0.00 0.00 0.00 0.00 0.00 00:09:45.083 =================================================================================================================== 00:09:45.083 Total : 8849.00 34.57 0.00 0.00 0.00 0.00 0.00 00:09:45.083 00:09:45.083 00:09:45.083 Latency(us) 00:09:45.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.083 Nvme0n1 : 10.01 8856.80 34.60 0.00 0.00 14444.80 5666.43 1018355.03 00:09:45.083 =================================================================================================================== 00:09:45.083 Total : 8856.80 34.60 0.00 0.00 14444.80 5666.43 1018355.03 00:09:45.083 0 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 69661 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 69661 ']' 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 69661 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69661 00:09:45.083 killing process with pid 69661 00:09:45.083 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.083 00:09:45.083 Latency(us) 00:09:45.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.083 =================================================================================================================== 00:09:45.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69661' 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 69661 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- 
# wait 69661 00:09:45.083 07:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.343 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:45.602 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:45.602 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 69063 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 69063 00:09:45.861 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 69063 Killed "${NVMF_APP[@]}" "$@" 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=69878 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 69878 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 69878 ']' 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.861 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.119 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:46.119 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.119 07:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.119 [2024-07-25 07:24:18.649156] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:46.119 [2024-07-25 07:24:18.649257] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.119 [2024-07-25 07:24:18.790137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.378 [2024-07-25 07:24:18.892844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.378 [2024-07-25 07:24:18.892889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.378 [2024-07-25 07:24:18.892896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.378 [2024-07-25 07:24:18.892901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.378 [2024-07-25 07:24:18.892906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.378 [2024-07-25 07:24:18.892932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.945 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.204 [2024-07-25 07:24:19.816856] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:47.204 [2024-07-25 07:24:19.817046] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:47.204 [2024-07-25 07:24:19.817149] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bd6000f5-addc-442b-9254-cc36f62c8236 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=bd6000f5-addc-442b-9254-cc36f62c8236 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:47.204 07:24:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:47.204 07:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.462 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bd6000f5-addc-442b-9254-cc36f62c8236 -t 2000 00:09:47.861 [ 00:09:47.861 { 00:09:47.861 "aliases": [ 00:09:47.861 "lvs/lvol" 00:09:47.861 ], 00:09:47.861 "assigned_rate_limits": { 00:09:47.861 "r_mbytes_per_sec": 0, 00:09:47.861 "rw_ios_per_sec": 0, 00:09:47.861 "rw_mbytes_per_sec": 0, 00:09:47.861 "w_mbytes_per_sec": 0 00:09:47.861 }, 00:09:47.861 "block_size": 4096, 00:09:47.861 "claimed": false, 00:09:47.861 "driver_specific": { 00:09:47.861 "lvol": { 00:09:47.861 "base_bdev": "aio_bdev", 00:09:47.861 "clone": false, 00:09:47.861 "esnap_clone": false, 00:09:47.861 "lvol_store_uuid": "902140b1-3b77-4350-9d19-ceadf71938de", 00:09:47.861 "num_allocated_clusters": 38, 00:09:47.861 "snapshot": false, 00:09:47.861 "thin_provision": false 00:09:47.861 } 00:09:47.861 }, 00:09:47.861 "name": "bd6000f5-addc-442b-9254-cc36f62c8236", 00:09:47.861 "num_blocks": 38912, 00:09:47.861 "product_name": "Logical Volume", 00:09:47.861 "supported_io_types": { 00:09:47.861 "abort": false, 00:09:47.861 "compare": false, 00:09:47.861 "compare_and_write": false, 00:09:47.861 "copy": false, 00:09:47.861 "flush": false, 00:09:47.861 "get_zone_info": false, 00:09:47.861 "nvme_admin": false, 00:09:47.861 "nvme_io": false, 00:09:47.861 "nvme_io_md": false, 00:09:47.861 "nvme_iov_md": false, 00:09:47.861 "read": true, 00:09:47.861 "reset": true, 00:09:47.861 "seek_data": true, 00:09:47.861 "seek_hole": true, 00:09:47.861 "unmap": true, 00:09:47.861 "write": true, 00:09:47.861 "write_zeroes": true, 00:09:47.861 "zcopy": false, 00:09:47.861 "zone_append": false, 00:09:47.861 "zone_management": false 00:09:47.861 }, 00:09:47.861 "uuid": "bd6000f5-addc-442b-9254-cc36f62c8236", 00:09:47.861 "zoned": false 00:09:47.861 } 00:09:47.861 ] 00:09:47.861 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:47.861 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:47.861 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:48.122 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:48.122 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:48.122 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:48.381 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( 
data_clusters == 99 )) 00:09:48.381 07:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.381 [2024-07-25 07:24:21.092152] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:48.642 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:48.901 2024/07/25 07:24:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:902140b1-3b77-4350-9d19-ceadf71938de], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:48.901 request: 00:09:48.901 { 00:09:48.901 "method": "bdev_lvol_get_lvstores", 00:09:48.901 "params": { 00:09:48.901 "uuid": "902140b1-3b77-4350-9d19-ceadf71938de" 00:09:48.901 } 00:09:48.901 } 00:09:48.901 Got JSON-RPC error response 00:09:48.901 GoRPCClient: error on JSON-RPC call 00:09:48.901 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:48.901 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:48.901 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:48.901 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:48.901 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:49.159 aio_bdev 00:09:49.159 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bd6000f5-addc-442b-9254-cc36f62c8236 00:09:49.159 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=bd6000f5-addc-442b-9254-cc36f62c8236 00:09:49.159 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:49.159 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:49.160 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:49.160 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:49.160 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:49.160 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bd6000f5-addc-442b-9254-cc36f62c8236 -t 2000 00:09:49.419 [ 00:09:49.419 { 00:09:49.419 "aliases": [ 00:09:49.419 "lvs/lvol" 00:09:49.419 ], 00:09:49.419 "assigned_rate_limits": { 00:09:49.419 "r_mbytes_per_sec": 0, 00:09:49.419 "rw_ios_per_sec": 0, 00:09:49.419 "rw_mbytes_per_sec": 0, 00:09:49.419 "w_mbytes_per_sec": 0 00:09:49.419 }, 00:09:49.419 "block_size": 4096, 00:09:49.419 "claimed": false, 00:09:49.419 "driver_specific": { 00:09:49.419 "lvol": { 00:09:49.419 "base_bdev": "aio_bdev", 00:09:49.419 "clone": false, 00:09:49.419 "esnap_clone": false, 00:09:49.419 "lvol_store_uuid": "902140b1-3b77-4350-9d19-ceadf71938de", 00:09:49.419 "num_allocated_clusters": 38, 00:09:49.419 "snapshot": false, 00:09:49.419 "thin_provision": false 00:09:49.419 } 00:09:49.419 }, 00:09:49.419 "name": "bd6000f5-addc-442b-9254-cc36f62c8236", 00:09:49.419 "num_blocks": 38912, 00:09:49.419 "product_name": "Logical Volume", 00:09:49.419 "supported_io_types": { 00:09:49.419 "abort": false, 00:09:49.419 "compare": false, 00:09:49.419 "compare_and_write": false, 00:09:49.419 "copy": false, 00:09:49.419 "flush": false, 00:09:49.419 "get_zone_info": false, 00:09:49.419 "nvme_admin": false, 00:09:49.419 "nvme_io": false, 00:09:49.419 "nvme_io_md": false, 00:09:49.419 "nvme_iov_md": false, 00:09:49.419 "read": true, 00:09:49.419 "reset": true, 00:09:49.419 "seek_data": true, 00:09:49.419 "seek_hole": true, 00:09:49.419 "unmap": true, 00:09:49.419 "write": true, 00:09:49.419 "write_zeroes": true, 00:09:49.419 "zcopy": false, 00:09:49.419 "zone_append": false, 00:09:49.419 "zone_management": false 00:09:49.419 }, 00:09:49.419 "uuid": "bd6000f5-addc-442b-9254-cc36f62c8236", 00:09:49.419 "zoned": false 00:09:49.419 } 00:09:49.419 ] 00:09:49.419 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:49.419 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:49.419 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:49.677 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:49.678 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:49.678 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:49.936 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:49.936 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bd6000f5-addc-442b-9254-cc36f62c8236 00:09:50.195 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 902140b1-3b77-4350-9d19-ceadf71938de 00:09:50.453 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:50.710 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:51.275 00:09:51.275 real 0m20.131s 00:09:51.275 user 0m40.834s 00:09:51.275 sys 0m6.721s 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:51.275 ************************************ 00:09:51.275 END TEST lvs_grow_dirty 00:09:51.275 ************************************ 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:51.275 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:51.276 nvmf_trace.0 00:09:51.276 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:51.276 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:51.276 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.276 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 
-- # sync 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.558 rmmod nvme_tcp 00:09:51.558 rmmod nvme_fabrics 00:09:51.558 rmmod nvme_keyring 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 69878 ']' 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 69878 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 69878 ']' 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 69878 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69878 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:51.558 killing process with pid 69878 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69878' 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 69878 00:09:51.558 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 69878 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:51.837 00:09:51.837 real 0m39.755s 00:09:51.837 user 1m3.672s 00:09:51.837 sys 0m9.469s 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:51.837 ************************************ 00:09:51.837 END TEST nvmf_lvs_grow 00:09:51.837 ************************************ 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.837 ************************************ 00:09:51.837 START TEST nvmf_bdev_io_wait 00:09:51.837 ************************************ 00:09:51.837 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:52.096 * Looking for test storage... 00:09:52.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.096 07:24:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.096 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.097 07:24:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:52.097 Cannot find device "nvmf_tgt_br" 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.097 Cannot find device "nvmf_tgt_br2" 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:52.097 Cannot find device "nvmf_tgt_br" 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:52.097 Cannot find device "nvmf_tgt_br2" 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:52.097 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:52.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:52.356 00:09:52.356 --- 10.0.0.2 ping statistics --- 00:09:52.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.356 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:52.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:52.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:09:52.356 00:09:52.356 --- 10.0.0.3 ping statistics --- 00:09:52.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.356 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:52.356 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:52.356 00:09:52.356 --- 10.0.0.1 ping statistics --- 00:09:52.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.357 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=70297 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 70297 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 70297 ']' 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.357 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.357 [2024-07-25 07:24:24.960571] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:52.357 [2024-07-25 07:24:24.960672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.615 [2024-07-25 07:24:25.087873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.615 [2024-07-25 07:24:25.201469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.615 [2024-07-25 07:24:25.201647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.615 [2024-07-25 07:24:25.201702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.615 [2024-07-25 07:24:25.201754] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.615 [2024-07-25 07:24:25.201784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.615 [2024-07-25 07:24:25.201915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.615 [2024-07-25 07:24:25.202027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.615 [2024-07-25 07:24:25.203264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.615 [2024-07-25 07:24:25.203265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.181 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.181 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:53.181 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.181 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.181 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 [2024-07-25 07:24:26.013326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 Malloc0 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.440 [2024-07-25 07:24:26.087093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=70350 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.440 07:24:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.440 { 00:09:53.440 "params": { 00:09:53.440 "name": "Nvme$subsystem", 00:09:53.440 "trtype": "$TEST_TRANSPORT", 00:09:53.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.440 "adrfam": "ipv4", 00:09:53.440 "trsvcid": "$NVMF_PORT", 00:09:53.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.440 "hdgst": ${hdgst:-false}, 00:09:53.440 "ddgst": ${ddgst:-false} 00:09:53.440 }, 00:09:53.440 "method": "bdev_nvme_attach_controller" 00:09:53.440 } 00:09:53.440 EOF 00:09:53.440 )") 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=70352 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=70356 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.440 { 00:09:53.440 "params": { 00:09:53.440 "name": "Nvme$subsystem", 00:09:53.440 "trtype": "$TEST_TRANSPORT", 00:09:53.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.440 "adrfam": "ipv4", 00:09:53.440 "trsvcid": "$NVMF_PORT", 00:09:53.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.440 "hdgst": ${hdgst:-false}, 00:09:53.440 "ddgst": ${ddgst:-false} 00:09:53.440 }, 00:09:53.440 "method": "bdev_nvme_attach_controller" 00:09:53.440 } 00:09:53.440 EOF 00:09:53.440 )") 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=70358 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.440 "params": { 00:09:53.440 "name": "Nvme1", 00:09:53.440 "trtype": "tcp", 00:09:53.440 "traddr": "10.0.0.2", 00:09:53.440 "adrfam": "ipv4", 00:09:53.440 "trsvcid": "4420", 00:09:53.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.440 "hdgst": false, 00:09:53.440 "ddgst": false 00:09:53.440 }, 00:09:53.440 "method": "bdev_nvme_attach_controller" 00:09:53.440 }' 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.440 { 00:09:53.440 "params": { 00:09:53.440 "name": "Nvme$subsystem", 00:09:53.440 "trtype": "$TEST_TRANSPORT", 00:09:53.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.440 "adrfam": "ipv4", 00:09:53.440 "trsvcid": "$NVMF_PORT", 00:09:53.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.440 "hdgst": ${hdgst:-false}, 00:09:53.440 "ddgst": ${ddgst:-false} 00:09:53.440 }, 00:09:53.440 "method": "bdev_nvme_attach_controller" 00:09:53.440 } 00:09:53.440 EOF 00:09:53.440 )") 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.440 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.441 { 00:09:53.441 "params": { 00:09:53.441 "name": "Nvme$subsystem", 00:09:53.441 "trtype": "$TEST_TRANSPORT", 00:09:53.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.441 "adrfam": "ipv4", 00:09:53.441 "trsvcid": "$NVMF_PORT", 00:09:53.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.441 "hdgst": ${hdgst:-false}, 00:09:53.441 "ddgst": ${ddgst:-false} 00:09:53.441 }, 00:09:53.441 "method": "bdev_nvme_attach_controller" 00:09:53.441 } 00:09:53.441 EOF 00:09:53.441 )") 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.441 "params": { 00:09:53.441 "name": "Nvme1", 00:09:53.441 "trtype": "tcp", 00:09:53.441 "traddr": "10.0.0.2", 00:09:53.441 "adrfam": "ipv4", 00:09:53.441 "trsvcid": "4420", 00:09:53.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.441 "hdgst": false, 00:09:53.441 "ddgst": false 00:09:53.441 }, 00:09:53.441 "method": "bdev_nvme_attach_controller" 00:09:53.441 }' 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.441 "params": { 00:09:53.441 "name": "Nvme1", 00:09:53.441 "trtype": "tcp", 00:09:53.441 "traddr": "10.0.0.2", 00:09:53.441 "adrfam": "ipv4", 00:09:53.441 "trsvcid": "4420", 00:09:53.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.441 "hdgst": false, 00:09:53.441 "ddgst": false 00:09:53.441 }, 00:09:53.441 "method": "bdev_nvme_attach_controller" 00:09:53.441 }' 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.441 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.441 "params": { 00:09:53.441 "name": "Nvme1", 00:09:53.441 "trtype": "tcp", 00:09:53.441 "traddr": "10.0.0.2", 00:09:53.441 "adrfam": "ipv4", 00:09:53.441 "trsvcid": "4420", 00:09:53.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.441 "hdgst": false, 00:09:53.441 "ddgst": false 00:09:53.441 }, 00:09:53.441 "method": "bdev_nvme_attach_controller" 00:09:53.441 }' 00:09:53.441 [2024-07-25 07:24:26.154999] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:53.441 [2024-07-25 07:24:26.155099] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:53.441 [2024-07-25 07:24:26.162857] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:53.441 [2024-07-25 07:24:26.162948] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:53.441 [2024-07-25 07:24:26.167505] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:53.441 [2024-07-25 07:24:26.167603] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:53.700 [2024-07-25 07:24:26.182947] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:09:53.700 [2024-07-25 07:24:26.183035] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:53.700 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 70350 00:09:53.700 [2024-07-25 07:24:26.355675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.958 [2024-07-25 07:24:26.435333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.958 [2024-07-25 07:24:26.464682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.958 [2024-07-25 07:24:26.507244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.958 [2024-07-25 07:24:26.523066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.958 [2024-07-25 07:24:26.594961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.958 [2024-07-25 07:24:26.596938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.958 Running I/O for 1 seconds... 00:09:53.958 Running I/O for 1 seconds... 00:09:54.216 [2024-07-25 07:24:26.708350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:54.216 Running I/O for 1 seconds... 00:09:54.216 Running I/O for 1 seconds... 
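The bdevperf jobs above each read their bdev configuration from a JSON document that gen_nvmf_target_json assembles on the fly and hands over through /dev/fd/63. A rough standalone sketch of that pattern, using only the values visible in the trace (Nvme1, 10.0.0.2:4420, cnode1/host1); the jq -n construction below stands in for the heredoc in nvmf/common.sh and is illustrative, not the helper's exact code:

    #!/usr/bin/env bash
    # Values taken from the trace above; everything else is an assumption.
    TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420

    gen_target_json_sketch() {
        # One fragment per subsystem; the trace calls the helper with no
        # arguments, so exactly one fragment (Nvme1) is produced.
        local subsystem fragments=()
        for subsystem in "${@:-1}"; do
            fragments+=("$(jq -n \
                --arg name    "Nvme$subsystem" \
                --arg trtype  "$TEST_TRANSPORT" \
                --arg traddr  "$NVMF_FIRST_TARGET_IP" \
                --arg trsvcid "$NVMF_PORT" \
                --arg subnqn  "nqn.2016-06.io.spdk:cnode$subsystem" \
                --arg hostnqn "nqn.2016-06.io.spdk:host$subsystem" \
                '{params: {name: $name, trtype: $trtype, traddr: $traddr,
                           adrfam: "ipv4", trsvcid: $trsvcid, subnqn: $subnqn,
                           hostnqn: $hostnqn, hdgst: false, ddgst: false},
                  method: "bdev_nvme_attach_controller"}')")
        done
        local IFS=,
        printf '%s\n' "${fragments[*]}" | jq .   # join and pretty-print, as the traced printf/jq calls do
    }

    # bdevperf then consumes the config through a file descriptor, matching
    # the unmap job above (core mask 0x80, queue depth 128, 4 KiB I/O):
    #   ./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_target_json_sketch) \
    #       -q 128 -o 4096 -w unmap -t 1 -s 256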
00:09:55.149 00:09:55.149 Latency(us) 00:09:55.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.149 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:55.149 Nvme1n1 : 1.00 177376.52 692.88 0.00 0.00 718.75 348.79 1309.29 00:09:55.149 =================================================================================================================== 00:09:55.149 Total : 177376.52 692.88 0.00 0.00 718.75 348.79 1309.29 00:09:55.149 00:09:55.149 Latency(us) 00:09:55.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.149 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:55.149 Nvme1n1 : 1.01 6536.94 25.53 0.00 0.00 19462.29 8699.98 21635.47 00:09:55.150 =================================================================================================================== 00:09:55.150 Total : 6536.94 25.53 0.00 0.00 19462.29 8699.98 21635.47 00:09:55.150 00:09:55.150 Latency(us) 00:09:55.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.150 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:55.150 Nvme1n1 : 1.01 8278.39 32.34 0.00 0.00 15382.54 9272.34 23123.62 00:09:55.150 =================================================================================================================== 00:09:55.150 Total : 8278.39 32.34 0.00 0.00 15382.54 9272.34 23123.62 00:09:55.150 00:09:55.150 Latency(us) 00:09:55.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.150 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:55.150 Nvme1n1 : 1.00 9634.77 37.64 0.00 0.00 13239.86 4550.32 25069.67 00:09:55.150 =================================================================================================================== 00:09:55.150 Total : 9634.77 37.64 0.00 0.00 13239.86 4550.32 25069.67 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 70352 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 70356 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 70358 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:09:55.408 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.667 rmmod nvme_tcp 00:09:55.667 rmmod nvme_fabrics 00:09:55.667 rmmod nvme_keyring 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 70297 ']' 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 70297 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 70297 ']' 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 70297 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70297 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.667 killing process with pid 70297 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70297' 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 70297 00:09:55.667 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 70297 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:55.926 00:09:55.926 real 0m3.968s 00:09:55.926 user 0m18.040s 00:09:55.926 sys 0m1.756s 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.926 ************************************ 00:09:55.926 END TEST nvmf_bdev_io_wait 
00:09:55.926 ************************************ 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.926 ************************************ 00:09:55.926 START TEST nvmf_queue_depth 00:09:55.926 ************************************ 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.926 * Looking for test storage... 00:09:55.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.926 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.927 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:56.185 Cannot find device "nvmf_tgt_br" 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.185 Cannot find device "nvmf_tgt_br2" 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:56.185 Cannot find device "nvmf_tgt_br" 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:56.185 Cannot find device "nvmf_tgt_br2" 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.185 07:24:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:56.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:56.185 00:09:56.185 --- 10.0.0.2 ping statistics --- 00:09:56.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.185 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:56.185 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:56.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:09:56.445 00:09:56.445 --- 10.0.0.3 ping statistics --- 00:09:56.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.445 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:56.445 00:09:56.445 --- 10.0.0.1 ping statistics --- 00:09:56.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.445 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=70590 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 70590 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 70590 ']' 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.445 07:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.445 [2024-07-25 07:24:29.009610] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:56.445 [2024-07-25 07:24:29.009726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.445 [2024-07-25 07:24:29.141855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.704 [2024-07-25 07:24:29.275358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.704 [2024-07-25 07:24:29.275415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.704 [2024-07-25 07:24:29.275423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.704 [2024-07-25 07:24:29.275429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.704 [2024-07-25 07:24:29.275434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.704 [2024-07-25 07:24:29.275459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.272 [2024-07-25 07:24:29.995053] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.272 07:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.272 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.530 Malloc0 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
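Stripped of the xtrace noise, the target bring-up that rpc_cmd is driving here (and completes on the next lines with a namespace and a TCP listener) boils down to a handful of rpc.py calls. A sketch, assuming the nvmf_tgt started above is already listening on its default /var/tmp/spdk.sock, with argument values copied verbatim from the traced commands:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same script rpc_cmd wraps

    # transport, backing bdev, and subsystem, with the arguments traced above
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

    # the next two traced calls expose Malloc0 on 10.0.0.2:4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420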
00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.530 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.531 [2024-07-25 07:24:30.061950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=70641 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 70641 /var/tmp/bdevperf.sock 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 70641 ']' 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.531 07:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.531 [2024-07-25 07:24:30.120914] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:09:57.531 [2024-07-25 07:24:30.121029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70641 ] 00:09:57.531 [2024-07-25 07:24:30.251145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.788 [2024-07-25 07:24:30.356775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.722 NVMe0n1 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.722 07:24:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.722 Running I/O for 10 seconds... 00:10:08.729 00:10:08.729 Latency(us) 00:10:08.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.729 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:08.729 Verification LBA range: start 0x0 length 0x4000 00:10:08.729 NVMe0n1 : 10.05 9798.99 38.28 0.00 0.00 104105.80 8585.50 75552.42 00:10:08.729 =================================================================================================================== 00:10:08.729 Total : 9798.99 38.28 0.00 0.00 104105.80 8585.50 75552.42 00:10:08.729 0 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 70641 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 70641 ']' 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 70641 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70641 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:08.729 killing process with pid 70641 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70641' 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 70641 00:10:08.729 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.729 00:10:08.729 Latency(us) 
00:10:08.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.729 =================================================================================================================== 00:10:08.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:08.729 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 70641 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:08.988 rmmod nvme_tcp 00:10:08.988 rmmod nvme_fabrics 00:10:08.988 rmmod nvme_keyring 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 70590 ']' 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 70590 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 70590 ']' 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 70590 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:08.988 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70590 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70590' 00:10:09.253 killing process with pid 70590 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 70590 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 70590 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.253 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.524 07:24:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:09.524 00:10:09.524 real 0m13.500s 00:10:09.524 user 0m23.757s 00:10:09.524 sys 0m1.810s 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.524 ************************************ 00:10:09.524 END TEST nvmf_queue_depth 00:10:09.524 ************************************ 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.524 ************************************ 00:10:09.524 START TEST nvmf_target_multipath 00:10:09.524 ************************************ 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.524 * Looking for test storage... 
00:10:09.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.524 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.525 07:24:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:09.525 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:09.784 Cannot find device "nvmf_tgt_br" 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.784 Cannot find device "nvmf_tgt_br2" 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:09.784 Cannot find device "nvmf_tgt_br" 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:09.784 Cannot find device "nvmf_tgt_br2" 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:09.784 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.043 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.043 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.043 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.043 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:10.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:10.043 00:10:10.043 --- 10.0.0.2 ping statistics --- 00:10:10.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.043 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:10.043 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:10.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:10.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:10:10.043 00:10:10.043 --- 10.0.0.3 ping statistics --- 00:10:10.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.043 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:10.044 00:10:10.044 --- 10.0.0.1 ping statistics --- 00:10:10.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.044 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=70969 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 70969 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 70969 ']' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
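(Sketch, not part of the test output: the nvmf_veth_init sequence traced above reduces to the small veth/namespace topology below. Interface names and addresses are the ones shown in the trace; the second target interface nvmf_tgt_if2/10.0.0.3 and all error handling are omitted for brevity.)
  ip netns add nvmf_tgt_ns_spdk                                   # target app runs inside this namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                  # bridge joins the two peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # host -> namespaced target, as checked above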
00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.044 07:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.044 [2024-07-25 07:24:42.639754] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:10:10.044 [2024-07-25 07:24:42.639822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.044 [2024-07-25 07:24:42.768252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.302 [2024-07-25 07:24:42.872397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.302 [2024-07-25 07:24:42.872446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.302 [2024-07-25 07:24:42.872454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.302 [2024-07-25 07:24:42.872459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.302 [2024-07-25 07:24:42.872464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.302 [2024-07-25 07:24:42.872673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.302 [2024-07-25 07:24:42.873017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.302 [2024-07-25 07:24:42.873136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.302 [2024-07-25 07:24:42.873156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.871 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.871 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:10.871 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.872 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.872 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:11.129 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.129 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.129 [2024-07-25 07:24:43.813780] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.387 07:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:11.387 Malloc0 00:10:11.387 07:24:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
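(Sketch, not part of the test output: the rpc.py calls traced here and in the entries that immediately follow build the dual-portal multipath target. Values are the ones used by this run; -r turns on ANA reporting for the subsystem, which the ana_state checks later in the test rely on.)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # second path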
00:10:11.645 07:24:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.903 07:24:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.160 [2024-07-25 07:24:44.737386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.160 07:24:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:12.418 [2024-07-25 07:24:44.953251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:12.418 07:24:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:12.675 07:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:12.675 07:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.675 07:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # local i=0 00:10:12.675 07:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.675 07:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:12.675 07:24:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # sleep 2 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # return 0 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:15.204 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=71112 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:15.205 07:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:15.205 [global] 00:10:15.205 thread=1 00:10:15.205 invalidate=1 00:10:15.205 rw=randrw 00:10:15.205 time_based=1 00:10:15.205 runtime=6 00:10:15.205 ioengine=libaio 00:10:15.205 direct=1 00:10:15.205 bs=4096 00:10:15.205 iodepth=128 00:10:15.205 norandommap=0 00:10:15.205 numjobs=1 00:10:15.205 00:10:15.205 verify_dump=1 00:10:15.205 verify_backlog=512 00:10:15.205 verify_state_save=0 00:10:15.205 do_verify=1 00:10:15.205 verify=crc32c-intel 00:10:15.205 [job0] 00:10:15.205 filename=/dev/nvme0n1 00:10:15.205 Could not set queue depth (nvme0n1) 00:10:15.205 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.205 fio-3.35 00:10:15.205 Starting 1 thread 00:10:15.772 07:24:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:16.337 07:24:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:16.337 07:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:17.713 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:17.713 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:17.713 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:17.713 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:17.713 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:17.972 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:17.973 07:24:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:18.911 07:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:18.912 07:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:18.912 07:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:18.912 07:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 71112 00:10:21.450 00:10:21.450 job0: (groupid=0, jobs=1): err= 0: pid=71133: Thu Jul 25 07:24:53 2024 00:10:21.450 read: IOPS=11.3k, BW=44.0MiB/s (46.2MB/s)(264MiB/6002msec) 00:10:21.450 slat (usec): min=4, max=5107, avg=50.04, stdev=213.00 00:10:21.450 clat (usec): min=326, max=14886, avg=7736.65, stdev=1406.83 00:10:21.450 lat (usec): min=428, max=14980, avg=7786.69, stdev=1415.82 00:10:21.450 clat percentiles (usec): 00:10:21.451 | 1.00th=[ 4490], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6718], 00:10:21.451 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7898], 00:10:21.451 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10159], 00:10:21.451 | 99.00th=[11863], 99.50th=[13042], 99.90th=[14091], 99.95th=[14484], 00:10:21.451 | 99.99th=[14746] 00:10:21.451 bw ( KiB/s): min=11704, max=29940, per=53.18%, avg=23976.36, stdev=5455.67, samples=11 00:10:21.451 iops : min= 2926, max= 7485, avg=5994.09, stdev=1363.92, samples=11 00:10:21.451 write: IOPS=6694, BW=26.1MiB/s (27.4MB/s)(139MiB/5324msec); 0 zone resets 00:10:21.451 slat (usec): min=8, max=2552, avg=62.26, stdev=143.97 00:10:21.451 clat (usec): min=427, max=14381, avg=6662.54, stdev=1227.50 00:10:21.451 lat (usec): min=516, max=14408, avg=6724.80, stdev=1232.33 00:10:21.451 clat percentiles (usec): 00:10:21.451 | 1.00th=[ 3195], 5.00th=[ 4686], 10.00th=[ 5342], 20.00th=[ 5866], 00:10:21.451 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6849], 00:10:21.451 | 70.00th=[ 7111], 80.00th=[ 7570], 90.00th=[ 8160], 95.00th=[ 8586], 00:10:21.451 | 99.00th=[ 9634], 99.50th=[10814], 99.90th=[13042], 99.95th=[14091], 00:10:21.451 | 99.99th=[14353] 00:10:21.451 bw ( KiB/s): min=12288, max=29397, per=89.71%, avg=24020.82, stdev=5220.42, samples=11 00:10:21.451 iops : min= 3072, max= 7349, avg=6005.18, stdev=1305.08, samples=11 00:10:21.451 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:10:21.451 lat (msec) : 2=0.14%, 4=0.95%, 10=94.60%, 20=4.27% 00:10:21.451 cpu : usr=6.00%, sys=26.20%, ctx=7169, majf=0, minf=121 00:10:21.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:21.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.451 issued rwts: total=67652,35639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.451 00:10:21.451 Run status group 0 (all jobs): 00:10:21.451 READ: bw=44.0MiB/s (46.2MB/s), 44.0MiB/s-44.0MiB/s (46.2MB/s-46.2MB/s), io=264MiB (277MB), run=6002-6002msec 00:10:21.451 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=139MiB (146MB), run=5324-5324msec 00:10:21.451 00:10:21.451 Disk stats (read/write): 00:10:21.451 nvme0n1: ios=66859/35141, merge=0/0, ticks=472533/212354, in_queue=684887, util=98.55% 00:10:21.451 07:24:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:21.451 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:21.707 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:21.708 07:24:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=71267 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:23.079 07:24:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:23.079 [global] 00:10:23.079 thread=1 00:10:23.079 invalidate=1 00:10:23.079 rw=randrw 00:10:23.079 time_based=1 00:10:23.079 runtime=6 00:10:23.079 ioengine=libaio 00:10:23.079 direct=1 00:10:23.079 bs=4096 00:10:23.079 iodepth=128 00:10:23.079 norandommap=0 00:10:23.079 numjobs=1 00:10:23.079 00:10:23.079 verify_dump=1 00:10:23.079 verify_backlog=512 00:10:23.079 verify_state_save=0 00:10:23.079 do_verify=1 00:10:23.079 verify=crc32c-intel 00:10:23.079 [job0] 00:10:23.079 filename=/dev/nvme0n1 00:10:23.079 Could not set queue depth (nvme0n1) 00:10:23.079 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.079 fio-3.35 00:10:23.079 Starting 1 thread 00:10:24.014 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:24.014 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:24.272 07:24:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:25.226 07:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:25.226 07:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:25.226 07:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:25.226 07:24:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:25.483 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:25.740 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:25.741 07:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:26.673 07:24:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:26.673 07:24:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:26.673 07:24:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:26.673 07:24:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 71267 00:10:29.200 00:10:29.200 job0: (groupid=0, jobs=1): err= 0: pid=71288: Thu Jul 25 07:25:01 2024 00:10:29.200 read: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(296MiB/6004msec) 00:10:29.200 slat (nsec): min=1909, max=5332.9k, avg=41689.11, stdev=206938.08 00:10:29.200 clat (usec): min=171, max=23704, avg=7075.63, stdev=1760.56 00:10:29.200 lat (usec): min=193, max=23720, avg=7117.32, stdev=1775.21 00:10:29.200 clat percentiles (usec): 00:10:29.200 | 1.00th=[ 2278], 5.00th=[ 3916], 10.00th=[ 4752], 20.00th=[ 5932], 00:10:29.200 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7439], 00:10:29.200 | 70.00th=[ 7767], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9634], 00:10:29.200 | 99.00th=[11600], 99.50th=[12911], 99.90th=[16319], 99.95th=[19006], 00:10:29.200 | 99.99th=[21890] 00:10:29.200 bw ( KiB/s): min= 8256, max=38128, per=51.78%, avg=26147.64, stdev=8301.61, samples=11 00:10:29.200 iops : min= 2064, max= 9532, avg=6536.91, stdev=2075.40, samples=11 00:10:29.200 write: IOPS=7287, BW=28.5MiB/s (29.8MB/s)(148MiB/5205msec); 0 zone resets 00:10:29.200 slat (usec): min=3, max=3567, avg=51.66, stdev=132.52 00:10:29.200 clat (usec): min=416, max=20736, avg=5926.41, stdev=1651.33 00:10:29.200 lat (usec): min=450, max=20783, avg=5978.07, stdev=1663.76 00:10:29.200 clat percentiles (usec): 00:10:29.200 | 1.00th=[ 1893], 5.00th=[ 2900], 10.00th=[ 3490], 20.00th=[ 4490], 00:10:29.200 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6259], 60.00th=[ 6521], 00:10:29.200 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 7373], 95.00th=[ 7832], 00:10:29.200 | 99.00th=[ 9765], 99.50th=[11469], 99.90th=[16712], 99.95th=[18482], 00:10:29.200 | 99.99th=[20317] 00:10:29.200 bw ( KiB/s): min= 8856, max=37240, per=89.65%, avg=26133.09, stdev=8006.19, samples=11 00:10:29.200 iops : min= 2214, max= 9310, avg=6533.27, stdev=2001.55, samples=11 00:10:29.200 lat (usec) : 250=0.01%, 500=0.02%, 750=0.04%, 1000=0.06% 00:10:29.200 lat (msec) : 2=0.67%, 4=7.87%, 10=88.60%, 20=2.71%, 50=0.02% 00:10:29.200 cpu : usr=4.81%, sys=23.32%, ctx=8382, majf=0, minf=96 00:10:29.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:29.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.200 issued rwts: total=75801,37933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.200 00:10:29.200 Run status group 0 (all jobs): 00:10:29.200 READ: bw=49.3MiB/s (51.7MB/s), 49.3MiB/s-49.3MiB/s (51.7MB/s-51.7MB/s), io=296MiB (310MB), run=6004-6004msec 00:10:29.200 WRITE: bw=28.5MiB/s (29.8MB/s), 28.5MiB/s-28.5MiB/s (29.8MB/s-29.8MB/s), io=148MiB (155MB), run=5205-5205msec 00:10:29.200 00:10:29.200 Disk stats (read/write): 00:10:29.200 nvme0n1: ios=74807/37348, merge=0/0, ticks=488925/202945, in_queue=691870, util=98.60% 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # local i=0 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # return 0 00:10:29.200 07:25:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.458 rmmod nvme_tcp 00:10:29.458 rmmod nvme_fabrics 00:10:29.458 rmmod nvme_keyring 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.458 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 70969 ']' 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 70969 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 70969 ']' 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 70969 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70969 00:10:29.459 killing process with pid 70969 00:10:29.459 
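(Sketch, not part of the test output: the teardown traced around this point follows a fixed order, initiator disconnect first, then subsystem deletion, then the kernel modules and the target process.)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                    # drops both controllers/paths at once
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                                          # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                                  # nvmfpid=70969 in this run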
07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70969' 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 70969 00:10:29.459 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 70969 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:29.718 ************************************ 00:10:29.718 END TEST nvmf_target_multipath 00:10:29.718 ************************************ 00:10:29.718 00:10:29.718 real 0m20.378s 00:10:29.718 user 1m20.202s 00:10:29.718 sys 0m6.026s 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.718 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.976 ************************************ 00:10:29.976 START TEST nvmf_zcopy 00:10:29.976 ************************************ 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.976 * Looking for test storage... 
00:10:29.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:10:29.976 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:29.977 Cannot find device "nvmf_tgt_br" 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.977 Cannot find device "nvmf_tgt_br2" 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:29.977 Cannot find device "nvmf_tgt_br" 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 
00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:29.977 Cannot find device "nvmf_tgt_br2" 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:29.977 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:30.236 07:25:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:30.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:10:30.236 00:10:30.236 --- 10.0.0.2 ping statistics --- 00:10:30.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.236 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:30.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:30.236 00:10:30.236 --- 10.0.0.3 ping statistics --- 00:10:30.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.236 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:30.236 00:10:30.236 --- 10.0.0.1 ping statistics --- 00:10:30.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.236 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=71564 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@482 -- # waitforlisten 71564 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 71564 ']' 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.236 07:25:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.236 [2024-07-25 07:25:02.940432] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:10:30.236 [2024-07-25 07:25:02.940516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.493 [2024-07-25 07:25:03.082652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.493 [2024-07-25 07:25:03.200026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.493 [2024-07-25 07:25:03.200130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.493 [2024-07-25 07:25:03.200144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.493 [2024-07-25 07:25:03.200153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.493 [2024-07-25 07:25:03.200161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
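At this point nvmf_tgt has been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and the harness blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent of that step, assuming the repository layout used in this run and using the rpc_get_methods call as a stand-in for the autotest waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the application is up
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done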
00:10:30.493 [2024-07-25 07:25:03.200195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.511 [2024-07-25 07:25:03.942696] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.511 [2024-07-25 07:25:03.958815] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.511 malloc0 00:10:31.511 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:31.512 { 00:10:31.512 "params": { 00:10:31.512 "name": "Nvme$subsystem", 00:10:31.512 "trtype": "$TEST_TRANSPORT", 00:10:31.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:31.512 "adrfam": "ipv4", 00:10:31.512 "trsvcid": "$NVMF_PORT", 00:10:31.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:31.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:31.512 "hdgst": ${hdgst:-false}, 00:10:31.512 "ddgst": ${ddgst:-false} 00:10:31.512 }, 00:10:31.512 "method": "bdev_nvme_attach_controller" 00:10:31.512 } 00:10:31.512 EOF 00:10:31.512 )") 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:31.512 07:25:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:31.512 "params": { 00:10:31.512 "name": "Nvme1", 00:10:31.512 "trtype": "tcp", 00:10:31.512 "traddr": "10.0.0.2", 00:10:31.512 "adrfam": "ipv4", 00:10:31.512 "trsvcid": "4420", 00:10:31.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:31.512 "hdgst": false, 00:10:31.512 "ddgst": false 00:10:31.512 }, 00:10:31.512 "method": "bdev_nvme_attach_controller" 00:10:31.512 }' 00:10:31.512 [2024-07-25 07:25:04.046376] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:10:31.512 [2024-07-25 07:25:04.046488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:10:31.512 [2024-07-25 07:25:04.184763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.770 [2024-07-25 07:25:04.311924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.770 Running I/O for 10 seconds... 
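The rpc_cmd calls traced above (target/zcopy.sh@22 through @30) provision the zero-copy target and then launch the first bdevperf pass, the 10-second verify run now in progress. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the sequence can be reproduced roughly as below; all arguments are copied from the trace, and the process substitution stands in for the /dev/fd/62 plumbing through which gen_nvmf_target_json feeds the bdev_nvme_attach_controller JSON shown above into bdevperf (the trace prints only that fragment, not the full wrapped config file).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                   # malloc bdev backing namespace 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # first pass: 10 s verify workload, queue depth 128, 8 KiB I/O
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192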
00:10:41.744 00:10:41.744 Latency(us) 00:10:41.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.744 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:41.744 Verification LBA range: start 0x0 length 0x1000 00:10:41.744 Nvme1n1 : 10.01 6661.07 52.04 0.00 0.00 19158.53 2632.89 29534.13 00:10:41.744 =================================================================================================================== 00:10:41.744 Total : 6661.07 52.04 0.00 0.00 19158.53 2632.89 29534.13 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=71738 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:42.003 { 00:10:42.003 "params": { 00:10:42.003 "name": "Nvme$subsystem", 00:10:42.003 "trtype": "$TEST_TRANSPORT", 00:10:42.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.003 "adrfam": "ipv4", 00:10:42.003 "trsvcid": "$NVMF_PORT", 00:10:42.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.003 "hdgst": ${hdgst:-false}, 00:10:42.003 "ddgst": ${ddgst:-false} 00:10:42.003 }, 00:10:42.003 "method": "bdev_nvme_attach_controller" 00:10:42.003 } 00:10:42.003 EOF 00:10:42.003 )") 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:42.003 [2024-07-25 07:25:14.699672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.003 [2024-07-25 07:25:14.699721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
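As a quick sanity check on the Latency(us) table above (first bdevperf pass: 10 s verify, queue depth 128, 8 KiB I/O), the reported columns are self-consistent:

  6661.07 IO/s x 8192 B  ≈ 54.6 MB/s ≈ 52.04 MiB/s              (matches the MiB/s column)
  128 (queue depth) / 19158.53 us average latency ≈ 6.7k IO/s   (Little's law, consistent with the IOPS column)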
00:10:42.003 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:42.003 07:25:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:42.003 "params": { 00:10:42.003 "name": "Nvme1", 00:10:42.003 "trtype": "tcp", 00:10:42.003 "traddr": "10.0.0.2", 00:10:42.003 "adrfam": "ipv4", 00:10:42.003 "trsvcid": "4420", 00:10:42.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.003 "hdgst": false, 00:10:42.003 "ddgst": false 00:10:42.003 }, 00:10:42.003 "method": "bdev_nvme_attach_controller" 00:10:42.003 }' 00:10:42.003 [2024-07-25 07:25:14.711607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.003 [2024-07-25 07:25:14.711643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.003 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.003 [2024-07-25 07:25:14.719621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.003 [2024-07-25 07:25:14.719668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.003 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.003 [2024-07-25 07:25:14.727605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.003 [2024-07-25 07:25:14.727649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.003 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.003 [2024-07-25 07:25:14.735583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.003 [2024-07-25 07:25:14.735626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.747563] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
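From here to the end of the excerpt the log is dominated by repetitions of the same pair of messages: the target-side "Requested NSID 1 already in use" error from spdk_nvmf_subsystem_add_ns_ext and the client-side JSON-RPC failure (Code=-32602, Invalid parameters). This appears to be the zcopy test behaving as intended rather than a fault: while the second bdevperf pass (5 s randrw, -M 50, 8 KiB I/O) runs, zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for a namespace that already exists, which drives the subsystem through its pause/resume path (note the nvmf_rpc_ns_paused frame in each error) while zero-copy I/O is outstanding. Each attempt boils down to the call below, with arguments as shown in the trace, and each is expected to fail:

  # NSID 1 already exists on cnode1, so this returns -32602 Invalid parameters
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1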
00:10:42.262 [2024-07-25 07:25:14.747574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.747623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 [2024-07-25 07:25:14.747650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71738 ] 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.759562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.759613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.771551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.771597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.783533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.783582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.795509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.795552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.807494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.807544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:42.262 [2024-07-25 07:25:14.819473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.819523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.831453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.831498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.843427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.843471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.851395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.851435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.863395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.863440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.871436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.871483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.879390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.879425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.887374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.887409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 [2024-07-25 07:25:14.889533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.895391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.895466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.903352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.903404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.911331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.911378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.919319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.919365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.927316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.927369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.935287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.935334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.943272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.943318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.951275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.951327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.959289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.959359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.967273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.967322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.975247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.975295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.983243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.983290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:42.262 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.262 [2024-07-25 07:25:14.991220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.262 [2024-07-25 07:25:14.991264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:14.999216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:14.999263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.007191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.007244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.015179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.015233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.021398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.521 [2024-07-25 07:25:15.027236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.027294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.039217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.039282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.051197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.051271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.063147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.063204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.521 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.521 [2024-07-25 07:25:15.075150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.521 [2024-07-25 07:25:15.075229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.087139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.087216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.099092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.099179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.111072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.111147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.123093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.123161] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.135091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.135187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.147133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.147202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.159069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.159147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.175071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.175154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.187046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.187110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.199079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.199154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 Running I/O for 5 seconds... 00:10:42.522 [2024-07-25 07:25:15.211092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.211163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.233935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.234030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.522 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.522 [2024-07-25 07:25:15.251855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.522 [2024-07-25 07:25:15.251934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.269706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.269780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.285439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.285508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.299377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.299437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.314014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.314078] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.330047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.330101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.340215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.340279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.350702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.350764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.362713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.362772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.371710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.371766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.383055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.383134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.393275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.393330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.404404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.404467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.414641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.781 [2024-07-25 07:25:15.414701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.781 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.781 [2024-07-25 07:25:15.425149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.425211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.435007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.435082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.445095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.445176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.454824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.454886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.464608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.464671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.474605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.474669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.484653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.484717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.494491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.494557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.782 [2024-07-25 07:25:15.504419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.782 [2024-07-25 07:25:15.504486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.782 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.517131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.517199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:43.040 [2024-07-25 07:25:15.526692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.526752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.536515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.536583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.546525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.546594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.556111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.556187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.565843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.565908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.575796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.575859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.585955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.586029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.595871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.595930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.605945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.606011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.616393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.616446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.627731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.627789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.639088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.639166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.650391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.650455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.661790] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.661859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.673372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.673431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.685012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.685088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.696819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.696886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.707981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.708038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.719537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.719597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.728860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.728920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.738707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.738769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.748403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.748468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.758417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.758478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.040 [2024-07-25 07:25:15.768433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.040 [2024-07-25 07:25:15.768501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.040 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.778307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.299 [2024-07-25 07:25:15.778364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.299 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.788254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.299 [2024-07-25 07:25:15.788312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.299 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.798074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:43.299 [2024-07-25 07:25:15.798144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.299 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.807819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.299 [2024-07-25 07:25:15.807883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.299 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.817669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.299 [2024-07-25 07:25:15.817737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.299 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.827591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.299 [2024-07-25 07:25:15.827659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.299 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.299 [2024-07-25 07:25:15.837763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.837827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.849151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.849217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.858425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.858481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.868048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.868103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.877672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.877728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.887308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.887360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.896885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.896937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.906559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.906613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.916168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.916224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.926041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:43.300 [2024-07-25 07:25:15.926095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.935928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.935993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.945735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.945801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.955489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.955556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.965128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.965190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.974921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.974985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.985024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.985086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:15.994752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:15.994816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:16.004679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:16.004746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:16.014579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:16.014645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.300 [2024-07-25 07:25:16.024413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.300 [2024-07-25 07:25:16.024477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.300 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.558 [2024-07-25 07:25:16.034423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.034485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.044753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.044824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.054699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.054766] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.064467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.064534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.074288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.074350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.084029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.084094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.093835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.093903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.104284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.104348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.115953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.116021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.125267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.125331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.135412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.135480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.145336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.145398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.156841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.156906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.166112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.166184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.176195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.176265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.186393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.186458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.197736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.197800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.206598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.206655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.216698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.216760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.226410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.226468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.236186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.236250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.246340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.246398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:43.559 [2024-07-25 07:25:16.257761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.257822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.559 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.559 [2024-07-25 07:25:16.266409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.559 [2024-07-25 07:25:16.266465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.560 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.560 [2024-07-25 07:25:16.276547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.560 [2024-07-25 07:25:16.276607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.560 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.560 [2024-07-25 07:25:16.286210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.560 [2024-07-25 07:25:16.286284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.560 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.818 [2024-07-25 07:25:16.296030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.818 [2024-07-25 07:25:16.296094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.305702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.305763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.315341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.315401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.324871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.324929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.334470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.334529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.344541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.344604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.354800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.354875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.365021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.365097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.374700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.374759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.384350] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.384411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.394082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.394165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.403789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.403852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.413411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.413471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.423136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.423220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.432736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.432798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.442592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.442659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.452334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.452400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.462307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.462368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.472092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.472165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.482385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.482447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.492062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.492146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.502371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.502434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.512257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.512320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.521909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.521971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.531507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.531567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.541437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.541501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.819 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.819 [2024-07-25 07:25:16.551447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.819 [2024-07-25 07:25:16.551508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.562354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.562415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.572139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.572206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.582087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.582151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.591846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.591901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.601443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.601492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.610846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.610896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.621633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.621689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.634514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.634557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.643700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:44.102 [2024-07-25 07:25:16.643743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.653233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.653284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.667442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.667496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.684667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.684754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.694598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.102 [2024-07-25 07:25:16.694663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.102 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.102 [2024-07-25 07:25:16.705604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.705695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.718558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.718607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.728202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.728262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.739349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.739407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.751407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.751468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.761216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.761274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.771042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.771096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.780706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.780760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.794379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.794441] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.802949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.803003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.103 [2024-07-25 07:25:16.814836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.103 [2024-07-25 07:25:16.814897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.103 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.826339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.826399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.834817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.834873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.846707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.846770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.857698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.857780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.869165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.869239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.881060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.881156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.894252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.894325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.904922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.904997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.916431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.916510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.927913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.927992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.940238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.940324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.952946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.953034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.965656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.965736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.977501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.977582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:16.989433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:16.989512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.369 2024/07/25 07:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.369 [2024-07-25 07:25:17.001642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.369 [2024-07-25 07:25:17.001721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.014598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.014687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:44.370 [2024-07-25 07:25:17.026664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.026744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.038079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.038187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.051690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.051756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.062707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.062779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.074021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.074093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.086172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.086251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.370 [2024-07-25 07:25:17.098186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.370 [2024-07-25 07:25:17.098257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.370 2024/07/25 07:25:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.109933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.110016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.123285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.123343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.134843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.134899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.146820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.146894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.158709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.158785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.170441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.170506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.181893] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.181953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.193506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.193579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.205251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.205327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.216460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.216535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.628 [2024-07-25 07:25:17.227786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.628 [2024-07-25 07:25:17.227861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.628 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.240004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.240081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.252471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.252551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.265055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.265153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.276700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.276761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.288197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.288252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.297366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.297416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.306786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.306837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.316302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.316358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.325971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.326031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.335660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.335712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.345010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.345061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.629 [2024-07-25 07:25:17.354412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.629 [2024-07-25 07:25:17.354458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.629 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.886 [2024-07-25 07:25:17.364034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.886 [2024-07-25 07:25:17.364090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.886 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.886 [2024-07-25 07:25:17.373547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.886 [2024-07-25 07:25:17.373595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.886 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.886 [2024-07-25 07:25:17.382855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.886 [2024-07-25 07:25:17.382903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.886 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.886 [2024-07-25 07:25:17.392455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.886 [2024-07-25 07:25:17.392511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.886 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.886 [2024-07-25 07:25:17.402053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.402125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.412035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.412095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.421742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.421805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.431355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.431411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.441328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.441388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.450832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:44.887 [2024-07-25 07:25:17.450890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.460376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.460436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.470016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.470072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.479668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.479726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.489164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.489222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.499044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.499102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.508705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.508764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.518490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.518551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.527885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.527947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.537404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.537463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.546985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.547044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.556629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.556684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.566150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.566206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.575540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.575595] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.585010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.585070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.594610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.594668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.604372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.604433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.887 [2024-07-25 07:25:17.614254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.887 [2024-07-25 07:25:17.614317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.887 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.624031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.624095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.633757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.633817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.643538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.643597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.653910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.653960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.666839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.666902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.676160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.676219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.685736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.685791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.695294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.695350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.704991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.705045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.714988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.715043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.724384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.724435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.741242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.741293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.758568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.758620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.769509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.769555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.785700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.785745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:45.146 [2024-07-25 07:25:17.801956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.802015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.818735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.818788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.835887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.835939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.852276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.852323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.146 [2024-07-25 07:25:17.869179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.146 [2024-07-25 07:25:17.869228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.146 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.404 [2024-07-25 07:25:17.884641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.404 [2024-07-25 07:25:17.884688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.901733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.901784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.912815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.912860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.929458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.929509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.945425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.945477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.957338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.957388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.973779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.973830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:17.984835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:17.984878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.004893] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.004953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.016253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.016312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.028078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.028137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.039631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.039672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.052857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.052903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.063439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.063476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.074166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.074205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.085366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.085412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.096473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.096518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.107726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.107769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.119130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.119175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.405 [2024-07-25 07:25:18.134608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.405 [2024-07-25 07:25:18.134657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.405 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.145885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.665 [2024-07-25 07:25:18.145936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.665 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.157329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:45.665 [2024-07-25 07:25:18.157376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.665 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.171086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.665 [2024-07-25 07:25:18.171146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.665 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.182182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.665 [2024-07-25 07:25:18.182226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.665 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.193487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.665 [2024-07-25 07:25:18.193529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.665 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.206769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.665 [2024-07-25 07:25:18.206817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.665 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.665 [2024-07-25 07:25:18.217770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.217820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.233567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.233624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.250706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.250762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.265972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.266033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.280578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.280625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.297466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.297509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.314618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.314664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.330246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.330283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.347842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:45.666 [2024-07-25 07:25:18.347886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.362970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.363014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.374060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.374104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.666 [2024-07-25 07:25:18.390850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.666 [2024-07-25 07:25:18.390890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.666 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.406891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.406933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.424341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.424379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.435147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.435184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.445542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.445575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.456567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.456603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.464438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.464471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.476449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.476484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.488185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.488224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.497049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.497086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.506570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.506606] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.516287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.516322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.525475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.525508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.534833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.534866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.544192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.544225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.553460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.553492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.562765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.562799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.572078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.572110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.581154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.581185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.590227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.590257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.599429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.599459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.608740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.608770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.927 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.927 [2024-07-25 07:25:18.617781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.927 [2024-07-25 07:25:18.617812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.928 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.928 [2024-07-25 07:25:18.626948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.928 [2024-07-25 07:25:18.626978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:45.928 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.928 [2024-07-25 07:25:18.636298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.928 [2024-07-25 07:25:18.636328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.928 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.928 [2024-07-25 07:25:18.645267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.928 [2024-07-25 07:25:18.645297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.928 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.928 [2024-07-25 07:25:18.654609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.928 [2024-07-25 07:25:18.654639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.928 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.664026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.664058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.673343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.673380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.682829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.682869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:46.188 [2024-07-25 07:25:18.692498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.692535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.701527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.701562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.711160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.711192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.724766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.724804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.739520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.739557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.754577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.754616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.766112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.766163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.781892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.781938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.799186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.799230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.815211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.815251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.832273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.832320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.843499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.843537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.188 [2024-07-25 07:25:18.852348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.188 [2024-07-25 07:25:18.852382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.188 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.189 [2024-07-25 07:25:18.861866] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.189 [2024-07-25 07:25:18.861908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.189 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.189 [2024-07-25 07:25:18.871987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.189 [2024-07-25 07:25:18.872033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.189 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.189 [2024-07-25 07:25:18.883609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.189 [2024-07-25 07:25:18.883647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.189 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.189 [2024-07-25 07:25:18.893739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.189 [2024-07-25 07:25:18.893789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.189 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.189 [2024-07-25 07:25:18.908196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.189 [2024-07-25 07:25:18.908238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.189 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.450 [2024-07-25 07:25:18.923741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.450 [2024-07-25 07:25:18.923781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.450 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.450 [2024-07-25 07:25:18.941571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.450 [2024-07-25 07:25:18.941612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.450 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.450 [2024-07-25 07:25:18.956931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.450 [2024-07-25 07:25:18.956968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.450 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.450 [2024-07-25 07:25:18.974765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.450 [2024-07-25 07:25:18.974809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:18.989383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:18.989426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.006849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.006912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.021299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.021363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.038239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.038292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.054650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.054702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.063767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.063811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.077618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.077675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.095016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.095076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.110511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.110581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.128088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.128146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.139268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.139310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.147464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.147502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.159013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.159057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.168234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.168277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.451 [2024-07-25 07:25:19.179434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.451 [2024-07-25 07:25:19.179473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.451 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.710 [2024-07-25 07:25:19.195678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.710 [2024-07-25 07:25:19.195727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.710 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.710 [2024-07-25 07:25:19.212634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.710 [2024-07-25 07:25:19.212681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.710 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.710 [2024-07-25 07:25:19.229514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:46.710 [2024-07-25 07:25:19.229559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.710 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.710 [2024-07-25 07:25:19.238859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.710 [2024-07-25 07:25:19.238902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.710 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.710 [2024-07-25 07:25:19.248252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.248292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.261939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.261986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.278359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.278407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.295394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.295439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.306214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.306257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.314892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.314934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.331305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.331351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.348128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.348184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.365296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.365346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.382415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.382465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.399062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.399124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.415718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.415769] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.711 [2024-07-25 07:25:19.432258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.711 [2024-07-25 07:25:19.432315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.711 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.449286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.449341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.465587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.465649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.482735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.482788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.498579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.498631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.515619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.515669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.526860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.526908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.537248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.537295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.547541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.547594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.555893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.555936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.567848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.567895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.584930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.584980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.600201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.600272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.611905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.611953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.620869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.620912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.630559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.630601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.640009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.640052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.649537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.649580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.659108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.659161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:46.971 [2024-07-25 07:25:19.668835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.668877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.678289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.678326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.687603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.687637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.971 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.971 [2024-07-25 07:25:19.701058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.971 [2024-07-25 07:25:19.701101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.718693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.718742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.733316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.733359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.749567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.749613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.766613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.766660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.783106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.783163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.799540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.799588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.817549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.817599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.833773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.231 [2024-07-25 07:25:19.833818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.231 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.231 [2024-07-25 07:25:19.850799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.850853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.866844] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.866901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.878145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.878203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.886777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.886826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.898061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.898135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.907422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.907481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.919155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.919209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.928282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.928331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.939533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.939586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.948783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.948837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.232 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.232 [2024-07-25 07:25:19.960775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.232 [2024-07-25 07:25:19.960831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:19.972921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:19.972978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:19.981835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:19.981876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:19.992877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:19.992927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.001861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.001909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.011339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.011382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.020998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.021045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.030414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.030464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.039985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.040028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.049834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.049881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.059533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.059579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.489 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.489 [2024-07-25 07:25:20.069365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.489 [2024-07-25 07:25:20.069419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.490 [2024-07-25 07:25:20.078927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.490 [2024-07-25 07:25:20.078975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.490 [2024-07-25 07:25:20.088606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.490 [2024-07-25 07:25:20.088650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.490 [2024-07-25 07:25:20.098186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.490 [2024-07-25 07:25:20.098230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.490 [2024-07-25 07:25:20.111164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.490 [2024-07-25 07:25:20.111217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.490 [2024-07-25 07:25:20.128520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.490 [2024-07-25 07:25:20.128579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.490 [2024-07-25 07:25:20.144406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
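The date-prefixed entries (2024/07/25 07:25:19 ...) appear to be logged by the Go JSON-RPC client driving these calls; the %!s(bool=false) token is Go's fmt placeholder for a bool passed to a %s verb and is only a logging artifact, not part of the request. Reconstructed from the parameter dump in these entries, the failing exchange looks roughly like this (shape inferred, not captured from the socket; assumes the default /var/tmp/spdk.sock and a netcat build with UNIX-socket support):

  cat <<'EOF' | nc -U /var/tmp/spdk.sock
  {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
   "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
              "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false}}}
  EOF
  # expected reply while NSID 1 is still in use:
  # {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}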
00:10:47.490 [2024-07-25 07:25:20.144465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.490 [2024-07-25 07:25:20.162185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.490 [2024-07-25 07:25:20.162245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.490 [2024-07-25 07:25:20.178147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.490 [2024-07-25 07:25:20.178202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.490 [2024-07-25 07:25:20.195553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.490 [2024-07-25 07:25:20.195608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.490 [2024-07-25 07:25:20.209247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.490 [2024-07-25 07:25:20.209304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.490 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.490
00:10:47.490 Latency(us)
00:10:47.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:47.490 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:47.490 Nvme1n1 : 5.01 12638.15 98.74 0.00 0.00 10115.27 4349.99 32968.33
00:10:47.490 ===================================================================================================================
00:10:47.490 Total : 12638.15 98.74 0.00 0.00 10115.27 4349.99 32968.33
00:10:47.490 [2024-07-25 07:25:20.218963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.490 [2024-07-25 07:25:20.219003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.230960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.231003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.242917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.242952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.254901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.254940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.266887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.266925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.278874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.278918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.290853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.290889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.302827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.302865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.314794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.314825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.326768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.326796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.338767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.338806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.350759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.350802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.748 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.748 [2024-07-25 07:25:20.362711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.748 [2024-07-25 07:25:20.362742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.749 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.749 [2024-07-25 07:25:20.370680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.749 [2024-07-25 07:25:20.370704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.749 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:47.749 [2024-07-25 07:25:20.382713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.749 [2024-07-25 07:25:20.382753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.749 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.749 [2024-07-25 07:25:20.390662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.749 [2024-07-25 07:25:20.390689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.749 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.749 [2024-07-25 07:25:20.398645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.749 [2024-07-25 07:25:20.398671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.749 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.749 [2024-07-25 07:25:20.406632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.749 [2024-07-25 07:25:20.406658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.749 2024/07/25 07:25:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.749 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (71738) - No such process 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 71738 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.749 delay0 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:47.749 
07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.749 07:25:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:48.007 [2024-07-25 07:25:20.609483] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:54.569 Initializing NVMe Controllers 00:10:54.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:54.570 Initialization complete. Launching workers. 00:10:54.570 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 86 00:10:54.570 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 373, failed to submit 33 00:10:54.570 success 200, unsuccess 173, failed 0 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.570 rmmod nvme_tcp 00:10:54.570 rmmod nvme_fabrics 00:10:54.570 rmmod nvme_keyring 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 71564 ']' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 71564 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 71564 ']' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 71564 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71564 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:54.570 killing 
process with pid 71564 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71564' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 71564 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 71564 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:54.570 00:10:54.570 real 0m24.510s 00:10:54.570 user 0m40.825s 00:10:54.570 sys 0m5.598s 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.570 07:25:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.570 ************************************ 00:10:54.570 END TEST nvmf_zcopy 00:10:54.570 ************************************ 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.570 ************************************ 00:10:54.570 START TEST nvmf_nmic 00:10:54.570 ************************************ 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:54.570 * Looking for test storage... 
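Before nmic.sh starts, the zcopy run tears itself down: the nvme_tcp, nvme_fabrics and nvme_keyring modules are removed and the target process (pid 71564, reactor_1) is stopped through the killprocess helper traced above. A condensed sketch of that pattern (simplified; the real helper in autotest_common.sh also branches on uname and handles sudo-wrapped targets separately):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                          # nothing to do if the pid is already gone
      if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
          echo "killing process with pid $pid"            # reactor_1 in the run above
          kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true                     # reap it if it was our child
  }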
00:10:54.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.570 07:25:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.570 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:54.571 Cannot find device "nvmf_tgt_br" 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.571 Cannot find device "nvmf_tgt_br2" 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:10:54.571 Cannot find device "nvmf_tgt_br" 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:54.571 Cannot find device "nvmf_tgt_br2" 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.571 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:54.829 07:25:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:54.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:54.829 00:10:54.829 --- 10.0.0.2 ping statistics --- 00:10:54.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.829 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:54.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:54.829 00:10:54.829 --- 10.0.0.3 ping statistics --- 00:10:54.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.829 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:54.829 00:10:54.829 --- 10.0.0.1 ping statistics --- 00:10:54.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.829 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=72054 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 72054 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 72054 ']' 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.829 07:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.829 [2024-07-25 07:25:27.423825] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:10:54.829 [2024-07-25 07:25:27.423937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.829 [2024-07-25 07:25:27.556109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.106 [2024-07-25 07:25:27.693169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.106 [2024-07-25 07:25:27.693253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.106 [2024-07-25 07:25:27.693266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.106 [2024-07-25 07:25:27.693275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.106 [2024-07-25 07:25:27.693282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.106 [2024-07-25 07:25:27.693405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.106 [2024-07-25 07:25:27.693549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.106 [2024-07-25 07:25:27.693653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.106 [2024-07-25 07:25:27.693658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 [2024-07-25 07:25:28.546351] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 Malloc0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 [2024-07-25 07:25:28.622936] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.059 test case1: single bdev can't be used in multiple subsystems 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 [2024-07-25 07:25:28.646713] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:56.059 [2024-07-25 07:25:28.646776] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:56.059 [2024-07-25 07:25:28.646790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.059 2024/07/25 07:25:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.059 request: 00:10:56.059 { 00:10:56.059 "method": "nvmf_subsystem_add_ns", 00:10:56.059 "params": { 00:10:56.059 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:56.059 "namespace": { 00:10:56.059 "bdev_name": "Malloc0", 00:10:56.059 "no_auto_visible": false 00:10:56.059 } 00:10:56.059 } 00:10:56.059 } 00:10:56.059 Got JSON-RPC error response 00:10:56.059 GoRPCClient: error on JSON-RPC call 00:10:56.059 Adding namespace failed - expected result. 00:10:56.059 test case2: host connect to nvmf target in multiple paths 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 [2024-07-25 07:25:28.654916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.059 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.317 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:56.317 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.317 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:10:56.317 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.317 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:56.317 07:25:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:10:58.842 07:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:58.842 07:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:58.842 07:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.842 07:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:58.842 07:25:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.842 07:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:10:58.842 07:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.842 [global] 00:10:58.842 thread=1 00:10:58.842 invalidate=1 00:10:58.842 rw=write 00:10:58.842 time_based=1 00:10:58.842 runtime=1 00:10:58.842 ioengine=libaio 00:10:58.842 direct=1 00:10:58.842 bs=4096 00:10:58.842 iodepth=1 00:10:58.842 norandommap=0 00:10:58.842 numjobs=1 00:10:58.842 00:10:58.842 verify_dump=1 00:10:58.842 verify_backlog=512 00:10:58.842 verify_state_save=0 00:10:58.842 do_verify=1 00:10:58.842 verify=crc32c-intel 00:10:58.842 [job0] 00:10:58.842 filename=/dev/nvme0n1 00:10:58.842 Could not set queue depth (nvme0n1) 00:10:58.842 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.842 fio-3.35 00:10:58.842 Starting 1 thread 00:10:59.774 00:10:59.774 job0: (groupid=0, jobs=1): err= 0: pid=72169: Thu Jul 25 07:25:32 2024 00:10:59.774 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:59.774 slat (nsec): min=8480, max=59860, avg=11049.67, stdev=2658.83 00:10:59.774 clat (usec): min=99, max=202, avg=120.81, stdev=13.29 00:10:59.774 lat (usec): min=108, max=212, avg=131.86, stdev=14.11 00:10:59.774 clat percentiles (usec): 00:10:59.774 | 1.00th=[ 104], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 112], 00:10:59.774 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:10:59.774 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 141], 95.00th=[ 151], 00:10:59.774 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 188], 00:10:59.774 | 99.99th=[ 204] 00:10:59.774 write: IOPS=4211, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec); 0 zone resets 00:10:59.774 slat (usec): min=12, max=127, avg=17.51, stdev= 6.62 00:10:59.774 clat (usec): min=73, max=703, avg=89.14, stdev=15.28 00:10:59.774 lat (usec): min=86, max=724, avg=106.65, stdev=18.25 00:10:59.774 clat percentiles (usec): 00:10:59.774 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82], 00:10:59.774 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:10:59.774 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 109], 00:10:59.774 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 184], 99.95th=[ 269], 00:10:59.774 | 99.99th=[ 701] 00:10:59.774 bw ( KiB/s): min=16592, max=16592, per=98.49%, avg=16592.00, stdev= 0.00, samples=1 00:10:59.774 iops : min= 4148, max= 4148, avg=4148.00, stdev= 0.00, samples=1 00:10:59.774 lat (usec) : 100=44.77%, 250=55.20%, 500=0.02%, 750=0.01% 00:10:59.774 cpu : usr=1.70%, sys=9.00%, ctx=8312, majf=0, minf=2 00:10:59.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.774 issued rwts: total=4096,4216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.774 00:10:59.774 Run status group 0 (all jobs): 00:10:59.774 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:10:59.774 WRITE: bw=16.5MiB/s (17.3MB/s), 16.5MiB/s-16.5MiB/s (17.3MB/s-17.3MB/s), io=16.5MiB (17.3MB), 
run=1001-1001msec 00:10:59.774 00:10:59.774 Disk stats (read/write): 00:10:59.774 nvme0n1: ios=3634/3872, merge=0/0, ticks=464/375, in_queue=839, util=91.68% 00:10:59.774 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:59.774 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.774 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:10:59.774 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.774 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:59.774 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.775 rmmod nvme_tcp 00:10:59.775 rmmod nvme_fabrics 00:10:59.775 rmmod nvme_keyring 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 72054 ']' 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 72054 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 72054 ']' 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 72054 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72054 00:10:59.775 killing process with pid 72054 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 72054' 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 72054 00:10:59.775 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 72054 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:00.033 00:11:00.033 real 0m5.738s 00:11:00.033 user 0m19.787s 00:11:00.033 sys 0m1.133s 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.033 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.033 ************************************ 00:11:00.033 END TEST nvmf_nmic 00:11:00.033 ************************************ 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.293 ************************************ 00:11:00.293 START TEST nvmf_fio_target 00:11:00.293 ************************************ 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:00.293 * Looking for test storage... 
00:11:00.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:00.293 
07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:00.293 07:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:00.293 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:00.553 Cannot find device "nvmf_tgt_br" 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.553 Cannot find device "nvmf_tgt_br2" 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:00.553 Cannot find device "nvmf_tgt_br" 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:00.553 Cannot find device "nvmf_tgt_br2" 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:00.553 
07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:00.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:00.553 00:11:00.553 --- 10.0.0.2 ping statistics --- 00:11:00.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.553 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:00.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:00.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:00.553 00:11:00.553 --- 10.0.0.3 ping statistics --- 00:11:00.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.553 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:00.553 00:11:00.553 --- 10.0.0.1 ping statistics --- 00:11:00.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.553 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.553 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=72345 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 72345 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 72345 ']' 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.826 07:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.826 [2024-07-25 07:25:33.364606] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:11:00.826 [2024-07-25 07:25:33.365483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.826 [2024-07-25 07:25:33.520579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.088 [2024-07-25 07:25:33.625626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.088 [2024-07-25 07:25:33.625681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.088 [2024-07-25 07:25:33.625687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.088 [2024-07-25 07:25:33.625692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.088 [2024-07-25 07:25:33.625696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.088 [2024-07-25 07:25:33.625785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.088 [2024-07-25 07:25:33.625829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.088 [2024-07-25 07:25:33.626098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.088 [2024-07-25 07:25:33.626100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.656 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:01.915 [2024-07-25 07:25:34.517157] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.915 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.174 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:02.174 07:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.434 07:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:02.434 07:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.717 07:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:02.717 07:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.977 07:25:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:02.977 07:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:03.236 07:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.494 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:03.494 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.752 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:03.752 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.011 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:04.011 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:04.269 07:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.527 07:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:04.527 07:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.785 07:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:04.785 07:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.044 07:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.044 [2024-07-25 07:25:37.763981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.303 07:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:05.561 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:11:05.819 07:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:11:08.356 07:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:08.356 [global] 00:11:08.356 thread=1 00:11:08.356 invalidate=1 00:11:08.356 rw=write 00:11:08.356 time_based=1 00:11:08.356 runtime=1 00:11:08.356 ioengine=libaio 00:11:08.356 direct=1 00:11:08.356 bs=4096 00:11:08.356 iodepth=1 00:11:08.356 norandommap=0 00:11:08.356 numjobs=1 00:11:08.356 00:11:08.356 verify_dump=1 00:11:08.356 verify_backlog=512 00:11:08.356 verify_state_save=0 00:11:08.356 do_verify=1 00:11:08.356 verify=crc32c-intel 00:11:08.356 [job0] 00:11:08.356 filename=/dev/nvme0n1 00:11:08.356 [job1] 00:11:08.356 filename=/dev/nvme0n2 00:11:08.356 [job2] 00:11:08.356 filename=/dev/nvme0n3 00:11:08.356 [job3] 00:11:08.356 filename=/dev/nvme0n4 00:11:08.356 Could not set queue depth (nvme0n1) 00:11:08.356 Could not set queue depth (nvme0n2) 00:11:08.356 Could not set queue depth (nvme0n3) 00:11:08.356 Could not set queue depth (nvme0n4) 00:11:08.356 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.356 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.356 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.356 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.356 fio-3.35 00:11:08.356 Starting 4 threads 00:11:09.291 00:11:09.291 job0: (groupid=0, jobs=1): err= 0: pid=72637: Thu Jul 25 07:25:41 2024 00:11:09.291 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:09.291 slat (nsec): min=9907, max=52062, avg=19811.01, stdev=6800.60 00:11:09.291 clat (usec): min=151, max=864, avg=303.69, stdev=59.47 00:11:09.291 lat (usec): min=163, max=877, avg=323.50, stdev=60.00 00:11:09.291 clat percentiles (usec): 00:11:09.291 | 1.00th=[ 165], 5.00th=[ 221], 10.00th=[ 247], 20.00th=[ 262], 00:11:09.291 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:11:09.291 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 404], 00:11:09.291 | 99.00th=[ 469], 99.50th=[ 515], 99.90th=[ 660], 99.95th=[ 865], 00:11:09.291 | 99.99th=[ 865] 00:11:09.291 write: IOPS=1820, BW=7281KiB/s (7455kB/s)(7288KiB/1001msec); 0 zone resets 00:11:09.291 slat 
(usec): min=15, max=103, avg=26.15, stdev= 9.32 00:11:09.291 clat (usec): min=106, max=431, avg=245.66, stdev=55.27 00:11:09.291 lat (usec): min=128, max=452, avg=271.81, stdev=54.73 00:11:09.291 clat percentiles (usec): 00:11:09.291 | 1.00th=[ 119], 5.00th=[ 135], 10.00th=[ 155], 20.00th=[ 204], 00:11:09.291 | 30.00th=[ 225], 40.00th=[ 241], 50.00th=[ 255], 60.00th=[ 269], 00:11:09.291 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 322], 00:11:09.291 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 412], 99.95th=[ 433], 00:11:09.291 | 99.99th=[ 433] 00:11:09.291 bw ( KiB/s): min= 8192, max= 8192, per=22.16%, avg=8192.00, stdev= 0.00, samples=1 00:11:09.291 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:09.291 lat (usec) : 250=30.08%, 500=69.59%, 750=0.30%, 1000=0.03% 00:11:09.291 cpu : usr=2.00%, sys=5.30%, ctx=3359, majf=0, minf=5 00:11:09.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.291 issued rwts: total=1536,1822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.291 job1: (groupid=0, jobs=1): err= 0: pid=72638: Thu Jul 25 07:25:41 2024 00:11:09.291 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:09.291 slat (nsec): min=8252, max=68892, avg=16812.62, stdev=6194.20 00:11:09.291 clat (usec): min=136, max=1707, avg=186.28, stdev=45.01 00:11:09.291 lat (usec): min=145, max=1720, avg=203.09, stdev=47.61 00:11:09.291 clat percentiles (usec): 00:11:09.291 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:09.291 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 188], 00:11:09.291 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 241], 00:11:09.291 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 644], 99.95th=[ 725], 00:11:09.291 | 99.99th=[ 1713] 00:11:09.291 write: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:11:09.291 slat (nsec): min=12466, max=92241, avg=23224.35, stdev=8233.65 00:11:09.291 clat (usec): min=96, max=293, avg=143.12, stdev=25.00 00:11:09.291 lat (usec): min=109, max=345, avg=166.34, stdev=30.84 00:11:09.291 clat percentiles (usec): 00:11:09.291 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:11:09.291 | 30.00th=[ 126], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 147], 00:11:09.291 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 186], 00:11:09.291 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 269], 99.95th=[ 273], 00:11:09.291 | 99.99th=[ 293] 00:11:09.292 bw ( KiB/s): min=11256, max=11256, per=30.45%, avg=11256.00, stdev= 0.00, samples=1 00:11:09.292 iops : min= 2814, max= 2814, avg=2814.00, stdev= 0.00, samples=1 00:11:09.292 lat (usec) : 100=0.02%, 250=98.38%, 500=1.54%, 750=0.04% 00:11:09.292 lat (msec) : 2=0.02% 00:11:09.292 cpu : usr=1.70%, sys=8.40%, ctx=5382, majf=0, minf=11 00:11:09.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.292 issued rwts: total=2560,2822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.292 job2: (groupid=0, jobs=1): err= 0: pid=72639: Thu Jul 25 07:25:41 2024 
00:11:09.292 read: IOPS=1499, BW=5998KiB/s (6142kB/s)(6004KiB/1001msec) 00:11:09.292 slat (nsec): min=12973, max=50903, avg=20587.76, stdev=6950.59 00:11:09.292 clat (usec): min=159, max=2121, avg=341.20, stdev=90.63 00:11:09.292 lat (usec): min=177, max=2143, avg=361.79, stdev=94.96 00:11:09.292 clat percentiles (usec): 00:11:09.292 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:11:09.292 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 338], 00:11:09.292 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 465], 95.00th=[ 502], 00:11:09.292 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 2114], 00:11:09.292 | 99.99th=[ 2114] 00:11:09.292 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:09.292 slat (usec): min=13, max=115, avg=30.07, stdev= 8.82 00:11:09.292 clat (usec): min=131, max=463, avg=262.42, stdev=46.11 00:11:09.292 lat (usec): min=155, max=550, avg=292.49, stdev=47.15 00:11:09.292 clat percentiles (usec): 00:11:09.292 | 1.00th=[ 145], 5.00th=[ 196], 10.00th=[ 210], 20.00th=[ 225], 00:11:09.292 | 30.00th=[ 239], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 273], 00:11:09.292 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 343], 00:11:09.292 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 457], 99.95th=[ 465], 00:11:09.292 | 99.99th=[ 465] 00:11:09.292 bw ( KiB/s): min= 7056, max= 7056, per=19.09%, avg=7056.00, stdev= 0.00, samples=1 00:11:09.292 iops : min= 1764, max= 1764, avg=1764.00, stdev= 0.00, samples=1 00:11:09.292 lat (usec) : 250=19.79%, 500=77.71%, 750=2.47% 00:11:09.292 lat (msec) : 4=0.03% 00:11:09.292 cpu : usr=1.10%, sys=6.30%, ctx=3037, majf=0, minf=14 00:11:09.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.292 issued rwts: total=1501,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.292 job3: (groupid=0, jobs=1): err= 0: pid=72640: Thu Jul 25 07:25:41 2024 00:11:09.292 read: IOPS=2873, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec) 00:11:09.292 slat (nsec): min=8590, max=39030, avg=15771.76, stdev=4086.11 00:11:09.292 clat (usec): min=132, max=283, avg=163.64, stdev=15.82 00:11:09.292 lat (usec): min=145, max=297, avg=179.41, stdev=16.44 00:11:09.292 clat percentiles (usec): 00:11:09.292 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:11:09.292 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:11:09.292 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:11:09.292 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 247], 99.95th=[ 269], 00:11:09.292 | 99.99th=[ 285] 00:11:09.292 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:09.292 slat (usec): min=12, max=198, avg=22.83, stdev= 6.28 00:11:09.292 clat (usec): min=96, max=2660, avg=131.10, stdev=66.62 00:11:09.292 lat (usec): min=115, max=2700, avg=153.94, stdev=67.30 00:11:09.292 clat percentiles (usec): 00:11:09.292 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 119], 00:11:09.292 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:11:09.292 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 157], 00:11:09.292 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 445], 99.95th=[ 2573], 00:11:09.292 | 99.99th=[ 2671] 00:11:09.292 bw ( KiB/s): min=12288, max=12288, 
per=33.24%, avg=12288.00, stdev= 0.00, samples=1 00:11:09.292 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:09.292 lat (usec) : 100=0.08%, 250=99.82%, 500=0.05%, 1000=0.02% 00:11:09.292 lat (msec) : 4=0.03% 00:11:09.292 cpu : usr=2.10%, sys=8.60%, ctx=5949, majf=0, minf=13 00:11:09.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.292 issued rwts: total=2876,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.292 00:11:09.292 Run status group 0 (all jobs): 00:11:09.292 READ: bw=33.1MiB/s (34.7MB/s), 5998KiB/s-11.2MiB/s (6142kB/s-11.8MB/s), io=33.1MiB (34.7MB), run=1001-1001msec 00:11:09.292 WRITE: bw=36.1MiB/s (37.9MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=36.1MiB (37.9MB), run=1001-1001msec 00:11:09.292 00:11:09.292 Disk stats (read/write): 00:11:09.292 nvme0n1: ios=1333/1536, merge=0/0, ticks=428/403, in_queue=831, util=88.08% 00:11:09.292 nvme0n2: ios=2097/2497, merge=0/0, ticks=423/388, in_queue=811, util=87.80% 00:11:09.292 nvme0n3: ios=1078/1536, merge=0/0, ticks=391/420, in_queue=811, util=89.20% 00:11:09.292 nvme0n4: ios=2524/2560, merge=0/0, ticks=418/350, in_queue=768, util=89.56% 00:11:09.292 07:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:09.292 [global] 00:11:09.292 thread=1 00:11:09.292 invalidate=1 00:11:09.292 rw=randwrite 00:11:09.292 time_based=1 00:11:09.292 runtime=1 00:11:09.292 ioengine=libaio 00:11:09.292 direct=1 00:11:09.292 bs=4096 00:11:09.292 iodepth=1 00:11:09.292 norandommap=0 00:11:09.292 numjobs=1 00:11:09.292 00:11:09.292 verify_dump=1 00:11:09.292 verify_backlog=512 00:11:09.292 verify_state_save=0 00:11:09.292 do_verify=1 00:11:09.292 verify=crc32c-intel 00:11:09.292 [job0] 00:11:09.292 filename=/dev/nvme0n1 00:11:09.292 [job1] 00:11:09.292 filename=/dev/nvme0n2 00:11:09.292 [job2] 00:11:09.292 filename=/dev/nvme0n3 00:11:09.292 [job3] 00:11:09.292 filename=/dev/nvme0n4 00:11:09.551 Could not set queue depth (nvme0n1) 00:11:09.551 Could not set queue depth (nvme0n2) 00:11:09.551 Could not set queue depth (nvme0n3) 00:11:09.551 Could not set queue depth (nvme0n4) 00:11:09.551 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.551 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.551 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.551 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.551 fio-3.35 00:11:09.551 Starting 4 threads 00:11:10.927 00:11:10.927 job0: (groupid=0, jobs=1): err= 0: pid=72700: Thu Jul 25 07:25:43 2024 00:11:10.927 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:10.927 slat (usec): min=7, max=2063, avg=16.22, stdev=52.63 00:11:10.927 clat (usec): min=161, max=4465, avg=329.74, stdev=205.17 00:11:10.927 lat (usec): min=170, max=4480, avg=345.96, stdev=212.27 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 215], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:11:10.927 | 30.00th=[ 285], 40.00th=[ 322], 50.00th=[ 
330], 60.00th=[ 338], 00:11:10.927 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 400], 00:11:10.927 | 99.00th=[ 469], 99.50th=[ 881], 99.90th=[ 4015], 99.95th=[ 4490], 00:11:10.927 | 99.99th=[ 4490] 00:11:10.927 write: IOPS=1739, BW=6957KiB/s (7124kB/s)(6964KiB/1001msec); 0 zone resets 00:11:10.927 slat (usec): min=7, max=127, avg=24.25, stdev=11.17 00:11:10.927 clat (usec): min=104, max=623, avg=240.87, stdev=42.58 00:11:10.927 lat (usec): min=119, max=660, avg=265.12, stdev=47.70 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 127], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 200], 00:11:10.927 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 258], 00:11:10.927 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:11:10.927 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 486], 99.95th=[ 627], 00:11:10.927 | 99.99th=[ 627] 00:11:10.927 bw ( KiB/s): min= 8192, max= 8192, per=24.22%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.927 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.927 lat (usec) : 250=35.06%, 500=64.57%, 750=0.12%, 1000=0.03% 00:11:10.927 lat (msec) : 2=0.06%, 4=0.09%, 10=0.06% 00:11:10.927 cpu : usr=1.10%, sys=5.70%, ctx=3284, majf=0, minf=9 00:11:10.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.927 issued rwts: total=1536,1741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.927 job1: (groupid=0, jobs=1): err= 0: pid=72701: Thu Jul 25 07:25:43 2024 00:11:10.927 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:10.927 slat (nsec): min=7097, max=45664, avg=11165.48, stdev=3792.06 00:11:10.927 clat (usec): min=117, max=394, avg=166.49, stdev=46.87 00:11:10.927 lat (usec): min=126, max=403, avg=177.65, stdev=47.22 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:11:10.927 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:11:10.927 | 70.00th=[ 155], 80.00th=[ 212], 90.00th=[ 247], 95.00th=[ 258], 00:11:10.927 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 367], 99.95th=[ 388], 00:11:10.927 | 99.99th=[ 396] 00:11:10.927 write: IOPS=3129, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:11:10.927 slat (usec): min=9, max=150, avg=16.22, stdev= 8.95 00:11:10.927 clat (usec): min=85, max=359, avg=125.97, stdev=38.72 00:11:10.927 lat (usec): min=98, max=383, avg=142.19, stdev=40.33 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 103], 00:11:10.927 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 115], 00:11:10.927 | 70.00th=[ 120], 80.00th=[ 135], 90.00th=[ 190], 95.00th=[ 217], 00:11:10.927 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 355], 00:11:10.927 | 99.99th=[ 359] 00:11:10.927 bw ( KiB/s): min=16160, max=16160, per=47.77%, avg=16160.00, stdev= 0.00, samples=1 00:11:10.927 iops : min= 4040, max= 4040, avg=4040.00, stdev= 0.00, samples=1 00:11:10.927 lat (usec) : 100=5.93%, 250=89.57%, 500=4.50% 00:11:10.927 cpu : usr=1.60%, sys=6.40%, ctx=6206, majf=0, minf=13 00:11:10.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.927 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.927 issued rwts: total=3072,3133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.927 job2: (groupid=0, jobs=1): err= 0: pid=72702: Thu Jul 25 07:25:43 2024 00:11:10.927 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:10.927 slat (nsec): min=8237, max=96052, avg=20456.88, stdev=11556.23 00:11:10.927 clat (usec): min=131, max=508, avg=309.54, stdev=50.97 00:11:10.927 lat (usec): min=141, max=550, avg=330.00, stdev=55.88 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 147], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 269], 00:11:10.927 | 30.00th=[ 281], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 330], 00:11:10.927 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 375], 00:11:10.927 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 506], 99.95th=[ 510], 00:11:10.927 | 99.99th=[ 510] 00:11:10.927 write: IOPS=1867, BW=7469KiB/s (7648kB/s)(7476KiB/1001msec); 0 zone resets 00:11:10.927 slat (usec): min=7, max=130, avg=25.04, stdev=12.65 00:11:10.927 clat (usec): min=101, max=378, avg=234.47, stdev=41.74 00:11:10.927 lat (usec): min=117, max=443, avg=259.51, stdev=49.00 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 114], 5.00th=[ 135], 10.00th=[ 186], 20.00th=[ 204], 00:11:10.927 | 30.00th=[ 215], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 253], 00:11:10.927 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:11:10.927 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 375], 99.95th=[ 379], 00:11:10.927 | 99.99th=[ 379] 00:11:10.927 bw ( KiB/s): min= 8175, max= 8175, per=24.17%, avg=8175.00, stdev= 0.00, samples=1 00:11:10.927 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:11:10.927 lat (usec) : 250=34.16%, 500=65.79%, 750=0.06% 00:11:10.927 cpu : usr=1.20%, sys=6.20%, ctx=3407, majf=0, minf=12 00:11:10.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.927 issued rwts: total=1536,1869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.927 job3: (groupid=0, jobs=1): err= 0: pid=72703: Thu Jul 25 07:25:43 2024 00:11:10.927 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:10.927 slat (nsec): min=7043, max=47335, avg=13256.87, stdev=4387.68 00:11:10.927 clat (usec): min=225, max=4453, avg=335.36, stdev=183.69 00:11:10.927 lat (usec): min=235, max=4470, avg=348.62, stdev=184.61 00:11:10.927 clat percentiles (usec): 00:11:10.927 | 1.00th=[ 233], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 269], 00:11:10.928 | 30.00th=[ 289], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:11:10.928 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 420], 00:11:10.928 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 3523], 99.95th=[ 4424], 00:11:10.928 | 99.99th=[ 4424] 00:11:10.928 write: IOPS=1720, BW=6881KiB/s (7046kB/s)(6888KiB/1001msec); 0 zone resets 00:11:10.928 slat (usec): min=7, max=123, avg=20.67, stdev=10.32 00:11:10.928 clat (usec): min=87, max=434, avg=245.47, stdev=41.09 00:11:10.928 lat (usec): min=103, max=458, avg=266.15, stdev=44.66 00:11:10.928 clat percentiles (usec): 00:11:10.928 | 1.00th=[ 130], 5.00th=[ 184], 10.00th=[ 196], 20.00th=[ 208], 00:11:10.928 | 30.00th=[ 223], 40.00th=[ 243], 
50.00th=[ 255], 60.00th=[ 265], 00:11:10.928 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:11:10.928 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 420], 99.95th=[ 437], 00:11:10.928 | 99.99th=[ 437] 00:11:10.928 bw ( KiB/s): min= 8192, max= 8192, per=24.22%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.928 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.928 lat (usec) : 100=0.03%, 250=25.94%, 500=73.66%, 750=0.18% 00:11:10.928 lat (msec) : 2=0.03%, 4=0.12%, 10=0.03% 00:11:10.928 cpu : usr=1.40%, sys=4.50%, ctx=3259, majf=0, minf=11 00:11:10.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.928 issued rwts: total=1536,1722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.928 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.928 00:11:10.928 Run status group 0 (all jobs): 00:11:10.928 READ: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:11:10.928 WRITE: bw=33.0MiB/s (34.6MB/s), 6881KiB/s-12.2MiB/s (7046kB/s-12.8MB/s), io=33.1MiB (34.7MB), run=1001-1001msec 00:11:10.928 00:11:10.928 Disk stats (read/write): 00:11:10.928 nvme0n1: ios=1263/1536, merge=0/0, ticks=429/377, in_queue=806, util=89.07% 00:11:10.928 nvme0n2: ios=2690/3072, merge=0/0, ticks=431/404, in_queue=835, util=89.30% 00:11:10.928 nvme0n3: ios=1367/1536, merge=0/0, ticks=432/386, in_queue=818, util=89.88% 00:11:10.928 nvme0n4: ios=1263/1536, merge=0/0, ticks=418/358, in_queue=776, util=90.05% 00:11:10.928 07:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:10.928 [global] 00:11:10.928 thread=1 00:11:10.928 invalidate=1 00:11:10.928 rw=write 00:11:10.928 time_based=1 00:11:10.928 runtime=1 00:11:10.928 ioengine=libaio 00:11:10.928 direct=1 00:11:10.928 bs=4096 00:11:10.928 iodepth=128 00:11:10.928 norandommap=0 00:11:10.928 numjobs=1 00:11:10.928 00:11:10.928 verify_dump=1 00:11:10.928 verify_backlog=512 00:11:10.928 verify_state_save=0 00:11:10.928 do_verify=1 00:11:10.928 verify=crc32c-intel 00:11:10.928 [job0] 00:11:10.928 filename=/dev/nvme0n1 00:11:10.928 [job1] 00:11:10.928 filename=/dev/nvme0n2 00:11:10.928 [job2] 00:11:10.928 filename=/dev/nvme0n3 00:11:10.928 [job3] 00:11:10.928 filename=/dev/nvme0n4 00:11:10.928 Could not set queue depth (nvme0n1) 00:11:10.928 Could not set queue depth (nvme0n2) 00:11:10.928 Could not set queue depth (nvme0n3) 00:11:10.928 Could not set queue depth (nvme0n4) 00:11:10.928 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.928 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.928 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.928 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.928 fio-3.35 00:11:10.928 Starting 4 threads 00:11:12.304 00:11:12.304 job0: (groupid=0, jobs=1): err= 0: pid=72757: Thu Jul 25 07:25:44 2024 00:11:12.304 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:11:12.304 slat (usec): min=3, max=7488, avg=182.39, stdev=689.82 00:11:12.304 clat (usec): min=17828, 
max=30117, avg=23794.94, stdev=2199.75 00:11:12.304 lat (usec): min=18282, max=30137, avg=23977.33, stdev=2184.38 00:11:12.304 clat percentiles (usec): 00:11:12.304 | 1.00th=[19268], 5.00th=[20055], 10.00th=[20841], 20.00th=[21890], 00:11:12.304 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[24249], 00:11:12.304 | 70.00th=[24511], 80.00th=[25560], 90.00th=[26870], 95.00th=[27657], 00:11:12.304 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:11:12.304 | 99.99th=[30016] 00:11:12.304 write: IOPS=2830, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1004msec); 0 zone resets 00:11:12.304 slat (usec): min=7, max=6622, avg=177.25, stdev=679.31 00:11:12.304 clat (usec): min=3138, max=29871, avg=23095.50, stdev=2938.28 00:11:12.304 lat (usec): min=3168, max=29902, avg=23272.75, stdev=2895.50 00:11:12.304 clat percentiles (usec): 00:11:12.304 | 1.00th=[ 9896], 5.00th=[18220], 10.00th=[19530], 20.00th=[21627], 00:11:12.304 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:11:12.304 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25822], 95.00th=[27132], 00:11:12.304 | 99.00th=[28181], 99.50th=[28443], 99.90th=[29754], 99.95th=[29754], 00:11:12.304 | 99.99th=[29754] 00:11:12.305 bw ( KiB/s): min= 9660, max=12015, per=17.09%, avg=10837.50, stdev=1665.24, samples=2 00:11:12.305 iops : min= 2415, max= 3003, avg=2709.00, stdev=415.78, samples=2 00:11:12.305 lat (msec) : 4=0.09%, 10=0.50%, 20=7.66%, 50=91.74% 00:11:12.305 cpu : usr=3.19%, sys=11.17%, ctx=900, majf=0, minf=9 00:11:12.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:12.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.305 issued rwts: total=2560,2842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.305 job1: (groupid=0, jobs=1): err= 0: pid=72758: Thu Jul 25 07:25:44 2024 00:11:12.305 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:12.305 slat (usec): min=4, max=7144, avg=182.55, stdev=714.56 00:11:12.305 clat (usec): min=17108, max=30907, avg=23846.98, stdev=2131.42 00:11:12.305 lat (usec): min=17131, max=30927, avg=24029.53, stdev=2084.36 00:11:12.305 clat percentiles (usec): 00:11:12.305 | 1.00th=[18744], 5.00th=[20317], 10.00th=[21365], 20.00th=[22152], 00:11:12.305 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23987], 60.00th=[24249], 00:11:12.305 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26346], 95.00th=[27657], 00:11:12.305 | 99.00th=[29754], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:11:12.305 | 99.99th=[30802] 00:11:12.305 write: IOPS=2867, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:11:12.305 slat (usec): min=7, max=6197, avg=175.93, stdev=659.10 00:11:12.305 clat (usec): min=804, max=30444, avg=22628.30, stdev=3733.50 00:11:12.305 lat (usec): min=830, max=30505, avg=22804.23, stdev=3708.14 00:11:12.305 clat percentiles (usec): 00:11:12.305 | 1.00th=[ 4359], 5.00th=[17695], 10.00th=[19006], 20.00th=[21103], 00:11:12.305 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:11:12.305 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26084], 00:11:12.305 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30278], 99.95th=[30540], 00:11:12.305 | 99.99th=[30540] 00:11:12.305 bw ( KiB/s): min=12191, max=12191, per=19.23%, avg=12191.00, stdev= 0.00, samples=1 00:11:12.305 iops : min= 3047, max= 3047, 
avg=3047.00, stdev= 0.00, samples=1 00:11:12.305 lat (usec) : 1000=0.11% 00:11:12.305 lat (msec) : 2=0.24%, 10=1.18%, 20=7.44%, 50=91.03% 00:11:12.305 cpu : usr=2.60%, sys=12.20%, ctx=879, majf=0, minf=17 00:11:12.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:12.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.305 issued rwts: total=2560,2870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.305 job2: (groupid=0, jobs=1): err= 0: pid=72759: Thu Jul 25 07:25:44 2024 00:11:12.305 read: IOPS=4689, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1003msec) 00:11:12.305 slat (usec): min=4, max=3186, avg=100.76, stdev=479.46 00:11:12.305 clat (usec): min=345, max=16317, avg=13081.77, stdev=1288.78 00:11:12.305 lat (usec): min=2980, max=16347, avg=13182.53, stdev=1219.71 00:11:12.305 clat percentiles (usec): 00:11:12.305 | 1.00th=[ 6718], 5.00th=[11076], 10.00th=[12125], 20.00th=[12780], 00:11:12.305 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:11:12.305 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14353], 00:11:12.305 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15664], 99.95th=[15664], 00:11:12.305 | 99.99th=[16319] 00:11:12.305 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:12.305 slat (usec): min=9, max=3208, avg=97.05, stdev=419.43 00:11:12.305 clat (usec): min=9388, max=16070, avg=12720.63, stdev=929.71 00:11:12.305 lat (usec): min=9618, max=16104, avg=12817.68, stdev=864.62 00:11:12.305 clat percentiles (usec): 00:11:12.305 | 1.00th=[10159], 5.00th=[10683], 10.00th=[11469], 20.00th=[12387], 00:11:12.305 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:11:12.305 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14222], 00:11:12.305 | 99.00th=[15139], 99.50th=[15533], 99.90th=[15795], 99.95th=[16057], 00:11:12.305 | 99.99th=[16057] 00:11:12.305 bw ( KiB/s): min=20183, max=20480, per=32.07%, avg=20331.50, stdev=210.01, samples=2 00:11:12.305 iops : min= 5045, max= 5120, avg=5082.50, stdev=53.03, samples=2 00:11:12.305 lat (usec) : 500=0.01% 00:11:12.305 lat (msec) : 4=0.33%, 10=0.78%, 20=98.88% 00:11:12.305 cpu : usr=3.39%, sys=11.98%, ctx=469, majf=0, minf=11 00:11:12.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:12.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.305 issued rwts: total=4704,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.305 job3: (groupid=0, jobs=1): err= 0: pid=72760: Thu Jul 25 07:25:44 2024 00:11:12.305 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:11:12.305 slat (usec): min=5, max=7012, avg=103.97, stdev=537.77 00:11:12.305 clat (usec): min=7631, max=20765, avg=13297.16, stdev=1690.44 00:11:12.305 lat (usec): min=8183, max=20777, avg=13401.14, stdev=1737.08 00:11:12.305 clat percentiles (usec): 00:11:12.305 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11469], 20.00th=[12649], 00:11:12.305 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:11:12.305 | 70.00th=[13435], 80.00th=[13960], 90.00th=[15270], 95.00th=[16909], 00:11:12.305 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19792], 
99.95th=[20841], 00:11:12.305 | 99.99th=[20841] 00:11:12.305 write: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:11:12.305 slat (usec): min=9, max=5187, avg=94.39, stdev=374.40 00:11:12.305 clat (usec): min=386, max=20269, avg=12877.07, stdev=1789.79 00:11:12.305 lat (usec): min=4090, max=20292, avg=12971.46, stdev=1795.21 00:11:12.305 clat percentiles (usec): 00:11:12.305 | 1.00th=[ 6063], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[12256], 00:11:12.305 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:11:12.305 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[15795], 00:11:12.305 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:11:12.305 | 99.99th=[20317] 00:11:12.305 bw ( KiB/s): min=19113, max=20521, per=31.26%, avg=19817.00, stdev=995.61, samples=2 00:11:12.305 iops : min= 4778, max= 5130, avg=4954.00, stdev=248.90, samples=2 00:11:12.305 lat (usec) : 500=0.01% 00:11:12.305 lat (msec) : 10=4.66%, 20=95.27%, 50=0.05% 00:11:12.305 cpu : usr=4.59%, sys=17.37%, ctx=539, majf=0, minf=13 00:11:12.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:12.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.305 issued rwts: total=4608,5082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.305 00:11:12.305 Run status group 0 (all jobs): 00:11:12.305 READ: bw=56.2MiB/s (58.9MB/s), 9.96MiB/s-18.3MiB/s (10.4MB/s-19.2MB/s), io=56.4MiB (59.1MB), run=1001-1004msec 00:11:12.305 WRITE: bw=61.9MiB/s (64.9MB/s), 11.1MiB/s-19.9MiB/s (11.6MB/s-20.9MB/s), io=62.2MiB (65.2MB), run=1001-1004msec 00:11:12.305 00:11:12.305 Disk stats (read/write): 00:11:12.305 nvme0n1: ios=2101/2560, merge=0/0, ticks=11414/12593, in_queue=24007, util=87.37% 00:11:12.305 nvme0n2: ios=2116/2560, merge=0/0, ticks=11420/12433, in_queue=23853, util=87.23% 00:11:12.305 nvme0n3: ios=4102/4358, merge=0/0, ticks=12759/12125, in_queue=24884, util=89.36% 00:11:12.305 nvme0n4: ios=4096/4153, merge=0/0, ticks=25535/23518, in_queue=49053, util=89.60% 00:11:12.305 07:25:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:12.305 [global] 00:11:12.305 thread=1 00:11:12.305 invalidate=1 00:11:12.305 rw=randwrite 00:11:12.305 time_based=1 00:11:12.305 runtime=1 00:11:12.305 ioengine=libaio 00:11:12.305 direct=1 00:11:12.305 bs=4096 00:11:12.305 iodepth=128 00:11:12.305 norandommap=0 00:11:12.305 numjobs=1 00:11:12.305 00:11:12.305 verify_dump=1 00:11:12.305 verify_backlog=512 00:11:12.305 verify_state_save=0 00:11:12.305 do_verify=1 00:11:12.305 verify=crc32c-intel 00:11:12.305 [job0] 00:11:12.305 filename=/dev/nvme0n1 00:11:12.305 [job1] 00:11:12.305 filename=/dev/nvme0n2 00:11:12.305 [job2] 00:11:12.305 filename=/dev/nvme0n3 00:11:12.305 [job3] 00:11:12.305 filename=/dev/nvme0n4 00:11:12.305 Could not set queue depth (nvme0n1) 00:11:12.305 Could not set queue depth (nvme0n2) 00:11:12.305 Could not set queue depth (nvme0n3) 00:11:12.305 Could not set queue depth (nvme0n4) 00:11:12.305 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.305 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.305 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.305 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.305 fio-3.35 00:11:12.305 Starting 4 threads 00:11:13.704 00:11:13.704 job0: (groupid=0, jobs=1): err= 0: pid=72813: Thu Jul 25 07:25:46 2024 00:11:13.704 read: IOPS=5940, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1002msec) 00:11:13.704 slat (usec): min=4, max=5836, avg=79.08, stdev=292.41 00:11:13.704 clat (usec): min=885, max=26035, avg=10463.70, stdev=3048.41 00:11:13.704 lat (usec): min=903, max=26055, avg=10542.78, stdev=3066.54 00:11:13.704 clat percentiles (usec): 00:11:13.704 | 1.00th=[ 5866], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:11:13.704 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:11:13.704 | 70.00th=[10290], 80.00th=[10683], 90.00th=[12518], 95.00th=[18744], 00:11:13.704 | 99.00th=[21365], 99.50th=[23987], 99.90th=[26084], 99.95th=[26084], 00:11:13.704 | 99.99th=[26084] 00:11:13.704 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:11:13.704 slat (usec): min=7, max=4735, avg=76.68, stdev=256.80 00:11:13.704 clat (usec): min=7277, max=26975, avg=10455.98, stdev=3199.37 00:11:13.704 lat (usec): min=7308, max=27000, avg=10532.66, stdev=3220.67 00:11:13.704 clat percentiles (usec): 00:11:13.704 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8455], 20.00th=[ 8586], 00:11:13.704 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:11:13.704 | 70.00th=[10028], 80.00th=[10945], 90.00th=[13960], 95.00th=[18744], 00:11:13.704 | 99.00th=[23987], 99.50th=[25822], 99.90th=[26608], 99.95th=[26608], 00:11:13.704 | 99.99th=[26870] 00:11:13.704 bw ( KiB/s): min=21178, max=28016, per=44.74%, avg=24597.00, stdev=4835.20, samples=2 00:11:13.704 iops : min= 5294, max= 7004, avg=6149.00, stdev=1209.15, samples=2 00:11:13.704 lat (usec) : 1000=0.04% 00:11:13.704 lat (msec) : 2=0.11%, 4=0.26%, 10=64.72%, 20=32.48%, 50=2.39% 00:11:13.704 cpu : usr=6.39%, sys=23.48%, ctx=1030, majf=0, minf=11 00:11:13.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:13.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.704 issued rwts: total=5952,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.704 job1: (groupid=0, jobs=1): err= 0: pid=72814: Thu Jul 25 07:25:46 2024 00:11:13.704 read: IOPS=2131, BW=8528KiB/s (8732kB/s)(8664KiB/1016msec) 00:11:13.704 slat (usec): min=8, max=12463, avg=227.90, stdev=1062.80 00:11:13.704 clat (usec): min=14845, max=52236, avg=29482.07, stdev=7408.05 00:11:13.704 lat (usec): min=15591, max=52254, avg=29709.97, stdev=7407.48 00:11:13.704 clat percentiles (usec): 00:11:13.704 | 1.00th=[17957], 5.00th=[19006], 10.00th=[20317], 20.00th=[21365], 00:11:13.704 | 30.00th=[23462], 40.00th=[27395], 50.00th=[30278], 60.00th=[32375], 00:11:13.704 | 70.00th=[33424], 80.00th=[34866], 90.00th=[38536], 95.00th=[42206], 00:11:13.704 | 99.00th=[48497], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:11:13.704 | 99.99th=[52167] 00:11:13.704 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec); 0 zone resets 00:11:13.704 slat (usec): min=11, max=6808, avg=187.61, stdev=710.40 00:11:13.704 clat (usec): min=14612, max=51060, avg=25048.68, stdev=5068.59 00:11:13.704 lat 
(usec): min=14832, max=51091, avg=25236.29, stdev=5083.31 00:11:13.704 clat percentiles (usec): 00:11:13.704 | 1.00th=[15270], 5.00th=[17433], 10.00th=[18220], 20.00th=[21103], 00:11:13.704 | 30.00th=[22676], 40.00th=[22938], 50.00th=[25035], 60.00th=[26346], 00:11:13.704 | 70.00th=[27657], 80.00th=[29492], 90.00th=[31589], 95.00th=[33817], 00:11:13.704 | 99.00th=[36439], 99.50th=[38011], 99.90th=[46400], 99.95th=[51119], 00:11:13.704 | 99.99th=[51119] 00:11:13.704 bw ( KiB/s): min= 8136, max=12272, per=18.56%, avg=10204.00, stdev=2924.59, samples=2 00:11:13.704 iops : min= 2034, max= 3068, avg=2551.00, stdev=731.15, samples=2 00:11:13.704 lat (msec) : 20=14.13%, 50=85.70%, 100=0.17% 00:11:13.704 cpu : usr=2.66%, sys=11.43%, ctx=468, majf=0, minf=15 00:11:13.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:13.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.704 issued rwts: total=2166,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.704 job2: (groupid=0, jobs=1): err= 0: pid=72815: Thu Jul 25 07:25:46 2024 00:11:13.704 read: IOPS=2344, BW=9377KiB/s (9602kB/s)(9480KiB/1011msec) 00:11:13.704 slat (usec): min=7, max=14780, avg=233.14, stdev=1199.35 00:11:13.704 clat (usec): min=8271, max=48610, avg=29266.07, stdev=8003.03 00:11:13.704 lat (usec): min=17105, max=48657, avg=29499.21, stdev=7987.47 00:11:13.704 clat percentiles (usec): 00:11:13.704 | 1.00th=[17433], 5.00th=[18482], 10.00th=[19006], 20.00th=[21365], 00:11:13.704 | 30.00th=[23725], 40.00th=[25560], 50.00th=[28181], 60.00th=[31327], 00:11:13.704 | 70.00th=[33817], 80.00th=[35914], 90.00th=[40109], 95.00th=[45876], 00:11:13.704 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:11:13.704 | 99.99th=[48497] 00:11:13.704 write: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec); 0 zone resets 00:11:13.704 slat (usec): min=11, max=9679, avg=164.55, stdev=765.56 00:11:13.704 clat (usec): min=10178, max=39529, avg=22644.74, stdev=5348.66 00:11:13.704 lat (usec): min=14792, max=39565, avg=22809.29, stdev=5323.26 00:11:13.704 clat percentiles (usec): 00:11:13.705 | 1.00th=[14746], 5.00th=[16057], 10.00th=[17171], 20.00th=[18744], 00:11:13.705 | 30.00th=[19792], 40.00th=[20317], 50.00th=[21365], 60.00th=[22676], 00:11:13.705 | 70.00th=[22938], 80.00th=[25035], 90.00th=[31851], 95.00th=[33817], 00:11:13.705 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:11:13.705 | 99.99th=[39584] 00:11:13.705 bw ( KiB/s): min= 8192, max=12312, per=18.65%, avg=10252.00, stdev=2913.28, samples=2 00:11:13.705 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:11:13.705 lat (msec) : 10=0.02%, 20=24.99%, 50=74.99% 00:11:13.705 cpu : usr=3.56%, sys=10.69%, ctx=189, majf=0, minf=11 00:11:13.705 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:11:13.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.705 issued rwts: total=2370,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.705 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.705 job3: (groupid=0, jobs=1): err= 0: pid=72816: Thu Jul 25 07:25:46 2024 00:11:13.705 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:11:13.705 slat (usec): min=4, 
max=15877, avg=221.31, stdev=1036.64 00:11:13.705 clat (usec): min=10301, max=46997, avg=28683.84, stdev=11040.85 00:11:13.705 lat (usec): min=10681, max=47029, avg=28905.15, stdev=11085.02 00:11:13.705 clat percentiles (usec): 00:11:13.705 | 1.00th=[11338], 5.00th=[12780], 10.00th=[13173], 20.00th=[14091], 00:11:13.705 | 30.00th=[20055], 40.00th=[23462], 50.00th=[30278], 60.00th=[33817], 00:11:13.705 | 70.00th=[37487], 80.00th=[40109], 90.00th=[42206], 95.00th=[43779], 00:11:13.705 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:11:13.705 | 99.99th=[46924] 00:11:13.705 write: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(10.6MiB/1014msec); 0 zone resets 00:11:13.705 slat (usec): min=8, max=5705, avg=149.79, stdev=574.06 00:11:13.705 clat (usec): min=10092, max=38680, avg=20199.98, stdev=6640.26 00:11:13.705 lat (usec): min=10122, max=41672, avg=20349.77, stdev=6672.29 00:11:13.705 clat percentiles (usec): 00:11:13.705 | 1.00th=[11469], 5.00th=[11731], 10.00th=[11994], 20.00th=[13698], 00:11:13.705 | 30.00th=[16057], 40.00th=[18220], 50.00th=[19268], 60.00th=[21103], 00:11:13.705 | 70.00th=[22676], 80.00th=[25035], 90.00th=[30016], 95.00th=[33817], 00:11:13.705 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:11:13.705 | 99.99th=[38536] 00:11:13.705 bw ( KiB/s): min= 8016, max=12625, per=18.77%, avg=10320.50, stdev=3259.06, samples=2 00:11:13.705 iops : min= 2004, max= 3156, avg=2580.00, stdev=814.59, samples=2 00:11:13.705 lat (msec) : 20=42.96%, 50=57.04% 00:11:13.705 cpu : usr=2.96%, sys=11.65%, ctx=632, majf=0, minf=13 00:11:13.705 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:13.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.705 issued rwts: total=2560,2701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.705 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.705 00:11:13.705 Run status group 0 (all jobs): 00:11:13.705 READ: bw=50.2MiB/s (52.6MB/s), 8528KiB/s-23.2MiB/s (8732kB/s-24.3MB/s), io=51.0MiB (53.4MB), run=1002-1016msec 00:11:13.705 WRITE: bw=53.7MiB/s (56.3MB/s), 9.84MiB/s-24.0MiB/s (10.3MB/s-25.1MB/s), io=54.6MiB (57.2MB), run=1002-1016msec 00:11:13.705 00:11:13.705 Disk stats (read/write): 00:11:13.705 nvme0n1: ios=5170/5320, merge=0/0, ticks=12494/10669, in_queue=23163, util=90.17% 00:11:13.705 nvme0n2: ios=2097/2119, merge=0/0, ticks=13770/11003, in_queue=24773, util=90.22% 00:11:13.705 nvme0n3: ios=2103/2433, merge=0/0, ticks=14028/10982, in_queue=25010, util=91.45% 00:11:13.705 nvme0n4: ios=2291/2560, merge=0/0, ticks=14036/10152, in_queue=24188, util=91.11% 00:11:13.705 07:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:13.705 07:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=72829 00:11:13.705 07:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:13.705 07:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:13.705 [global] 00:11:13.705 thread=1 00:11:13.705 invalidate=1 00:11:13.705 rw=read 00:11:13.705 time_based=1 00:11:13.705 runtime=10 00:11:13.705 ioengine=libaio 00:11:13.705 direct=1 00:11:13.705 bs=4096 00:11:13.705 iodepth=1 00:11:13.705 norandommap=1 00:11:13.705 numjobs=1 00:11:13.705 00:11:13.705 [job0] 00:11:13.705 filename=/dev/nvme0n1 
00:11:13.705 [job1] 00:11:13.705 filename=/dev/nvme0n2 00:11:13.705 [job2] 00:11:13.705 filename=/dev/nvme0n3 00:11:13.705 [job3] 00:11:13.705 filename=/dev/nvme0n4 00:11:13.705 Could not set queue depth (nvme0n1) 00:11:13.705 Could not set queue depth (nvme0n2) 00:11:13.705 Could not set queue depth (nvme0n3) 00:11:13.705 Could not set queue depth (nvme0n4) 00:11:13.705 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.705 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.705 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.705 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.705 fio-3.35 00:11:13.705 Starting 4 threads 00:11:16.995 07:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:16.995 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=61448192, buflen=4096 00:11:16.995 fio: pid=72883, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:16.995 07:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:16.995 fio: pid=72882, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:16.995 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=51466240, buflen=4096 00:11:16.995 07:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.995 07:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:17.254 fio: pid=72880, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:17.254 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=56807424, buflen=4096 00:11:17.254 07:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.254 07:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:17.513 fio: pid=72881, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:17.513 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=27734016, buflen=4096 00:11:17.513 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.513 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:17.513 00:11:17.514 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72880: Thu Jul 25 07:25:50 2024 00:11:17.514 read: IOPS=4223, BW=16.5MiB/s (17.3MB/s)(54.2MiB/3284msec) 00:11:17.514 slat (usec): min=5, max=13487, avg=11.34, stdev=191.34 00:11:17.514 clat (usec): min=104, max=1799, avg=224.60, stdev=39.87 00:11:17.514 lat (usec): min=114, max=13727, avg=235.94, stdev=196.54 00:11:17.514 clat percentiles (usec): 00:11:17.514 | 1.00th=[ 125], 5.00th=[ 137], 10.00th=[ 159], 20.00th=[ 215], 00:11:17.514 | 30.00th=[ 221], 40.00th=[ 227], 
50.00th=[ 231], 60.00th=[ 235], 00:11:17.514 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:11:17.514 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 383], 99.95th=[ 457], 00:11:17.514 | 99.99th=[ 1582] 00:11:17.514 bw ( KiB/s): min=16248, max=16544, per=22.36%, avg=16433.33, stdev=110.32, samples=6 00:11:17.514 iops : min= 4062, max= 4136, avg=4108.33, stdev=27.58, samples=6 00:11:17.514 lat (usec) : 250=85.12%, 500=14.84%, 750=0.01% 00:11:17.514 lat (msec) : 2=0.02% 00:11:17.514 cpu : usr=0.79%, sys=3.20%, ctx=13878, majf=0, minf=1 00:11:17.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 issued rwts: total=13870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.514 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72881: Thu Jul 25 07:25:50 2024 00:11:17.514 read: IOPS=6585, BW=25.7MiB/s (27.0MB/s)(90.4MiB/3516msec) 00:11:17.514 slat (usec): min=6, max=14691, avg=11.70, stdev=152.83 00:11:17.514 clat (usec): min=82, max=2255, avg=139.34, stdev=29.80 00:11:17.514 lat (usec): min=89, max=14925, avg=151.04, stdev=156.49 00:11:17.514 clat percentiles (usec): 00:11:17.514 | 1.00th=[ 96], 5.00th=[ 105], 10.00th=[ 122], 20.00th=[ 130], 00:11:17.514 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:11:17.514 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:11:17.514 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 293], 99.95th=[ 490], 00:11:17.514 | 99.99th=[ 1663] 00:11:17.514 bw ( KiB/s): min=25200, max=26584, per=35.26%, avg=25910.33, stdev=680.68, samples=6 00:11:17.514 iops : min= 6300, max= 6646, avg=6477.50, stdev=170.23, samples=6 00:11:17.514 lat (usec) : 100=2.26%, 250=97.62%, 500=0.07%, 750=0.03%, 1000=0.01% 00:11:17.514 lat (msec) : 2=0.01%, 4=0.01% 00:11:17.514 cpu : usr=0.97%, sys=4.98%, ctx=23166, majf=0, minf=1 00:11:17.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 issued rwts: total=23156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.514 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72882: Thu Jul 25 07:25:50 2024 00:11:17.514 read: IOPS=4085, BW=16.0MiB/s (16.7MB/s)(49.1MiB/3076msec) 00:11:17.514 slat (usec): min=5, max=17275, avg= 9.52, stdev=191.52 00:11:17.514 clat (usec): min=130, max=1533, avg=234.52, stdev=25.28 00:11:17.514 lat (usec): min=138, max=17501, avg=244.05, stdev=193.12 00:11:17.514 clat percentiles (usec): 00:11:17.514 | 1.00th=[ 161], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 221], 00:11:17.514 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:11:17.514 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:11:17.514 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 408], 00:11:17.514 | 99.99th=[ 1123] 00:11:17.514 bw ( KiB/s): min=16256, max=16536, per=22.38%, avg=16446.60, stdev=120.31, samples=5 00:11:17.514 iops : min= 4064, max= 4134, avg=4111.60, stdev=30.05, samples=5 00:11:17.514 lat (usec) : 
250=81.82%, 500=18.14%, 750=0.01% 00:11:17.514 lat (msec) : 2=0.02% 00:11:17.514 cpu : usr=0.46%, sys=2.96%, ctx=12573, majf=0, minf=1 00:11:17.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 issued rwts: total=12566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.514 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72883: Thu Jul 25 07:25:50 2024 00:11:17.514 read: IOPS=5267, BW=20.6MiB/s (21.6MB/s)(58.6MiB/2848msec) 00:11:17.514 slat (usec): min=7, max=104, avg= 9.28, stdev= 2.38 00:11:17.514 clat (usec): min=135, max=2053, avg=179.70, stdev=32.93 00:11:17.514 lat (usec): min=143, max=2061, avg=188.98, stdev=33.48 00:11:17.514 clat percentiles (usec): 00:11:17.514 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:17.514 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:11:17.514 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 217], 00:11:17.514 | 99.00th=[ 249], 99.50th=[ 306], 99.90th=[ 510], 99.95th=[ 594], 00:11:17.514 | 99.99th=[ 1926] 00:11:17.514 bw ( KiB/s): min=20040, max=21792, per=28.61%, avg=21024.00, stdev=853.29, samples=5 00:11:17.514 iops : min= 5010, max= 5448, avg=5256.00, stdev=213.32, samples=5 00:11:17.514 lat (usec) : 250=99.00%, 500=0.88%, 750=0.09%, 1000=0.01% 00:11:17.514 lat (msec) : 2=0.01%, 4=0.01% 00:11:17.514 cpu : usr=0.74%, sys=4.25%, ctx=15003, majf=0, minf=2 00:11:17.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.514 issued rwts: total=15003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.514 00:11:17.514 Run status group 0 (all jobs): 00:11:17.514 READ: bw=71.8MiB/s (75.2MB/s), 16.0MiB/s-25.7MiB/s (16.7MB/s-27.0MB/s), io=252MiB (265MB), run=2848-3516msec 00:11:17.514 00:11:17.514 Disk stats (read/write): 00:11:17.514 nvme0n1: ios=12854/0, merge=0/0, ticks=2929/0, in_queue=2929, util=95.22% 00:11:17.514 nvme0n2: ios=22000/0, merge=0/0, ticks=3138/0, in_queue=3138, util=95.45% 00:11:17.514 nvme0n3: ios=11882/0, merge=0/0, ticks=2693/0, in_queue=2693, util=96.71% 00:11:17.514 nvme0n4: ios=13847/0, merge=0/0, ticks=2506/0, in_queue=2506, util=96.52% 00:11:17.774 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.774 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:18.033 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.033 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:18.292 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.292 07:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:18.551 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.551 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 72829 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:18.811 nvmf hotplug test: fio failed as expected 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:18.811 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.071 rmmod nvme_tcp 00:11:19.071 rmmod 
nvme_fabrics 00:11:19.071 rmmod nvme_keyring 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 72345 ']' 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 72345 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 72345 ']' 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 72345 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72345 00:11:19.071 killing process with pid 72345 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72345' 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 72345 00:11:19.071 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 72345 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.363 07:25:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.363 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:19.363 00:11:19.363 real 0m19.224s 00:11:19.363 user 1m15.375s 00:11:19.363 sys 0m7.466s 00:11:19.363 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.363 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.363 ************************************ 00:11:19.363 END TEST nvmf_fio_target 00:11:19.363 ************************************ 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 
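[Editor's note] For readers skimming the fio_target trace above: the "hotplug" check deletes the malloc bdevs backing the subsystem namespaces while fio is still running against the connected NVMe device, and then treats a non-zero fio exit status as the expected result ("fio failed as expected"). A minimal sketch of that pattern follows; the fio job options and the /dev/nvme0n1 device path are illustrative assumptions (the real script resolves the device from lsblk), while the rpc.py path, the Malloc3..Malloc6 names, and the expected-failure check mirror the trace.

```bash
#!/usr/bin/env bash
# Sketch of the hotplug phase seen above (illustrative fio options and device path).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start fio against the namespace exposed by nqn.2016-06.io.spdk:cnode1.
fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4k \
    --ioengine=libaio --direct=1 --time_based --runtime=10 &
fio_pid=$!

# Pull the backing bdevs out from under the running job.
for malloc in Malloc3 Malloc4 Malloc5 Malloc6; do
    "$rpc" bdev_malloc_delete "$malloc"
done

# fio is expected to fail once its target disappears.
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"
```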
00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.624 ************************************ 00:11:19.624 START TEST nvmf_bdevio 00:11:19.624 ************************************ 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.624 * Looking for test storage... 00:11:19.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:19.624 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set 
nvmf_init_br nomaster 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:19.625 Cannot find device "nvmf_tgt_br" 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.625 Cannot find device "nvmf_tgt_br2" 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:19.625 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:19.885 Cannot find device "nvmf_tgt_br" 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:19.885 Cannot find device "nvmf_tgt_br2" 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 
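[Editor's note] The "Cannot find device" and "Cannot open network namespace" messages above are expected noise from tearing down interfaces left by a previous run; the commands around them rebuild the test topology from scratch: a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side nvmf_init_if (10.0.0.1) left in the root namespace, and everything joined through the nvmf_br bridge with an iptables rule admitting TCP/4420 (the bridge, iptables, and ping steps follow just below). Once a subsystem is listening, the initiator side can reach it with stock nvme-cli; a hedged sketch using the address, port, and NQN that appear later in this log:

```bash
# Illustrative only -- the test scripts drive I/O through rpc.py, fio, and
# bdevio rather than nvme-cli, but the data path is the same.
modprobe nvme-tcp

# Connect from the root (initiator) namespace to the target listening
# inside nvmf_tgt_ns_spdk on 10.0.0.2:4420.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

nvme list                                      # namespace appears as a block device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # clean up when done
```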
00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:19.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:19.885 00:11:19.885 --- 10.0.0.2 ping statistics --- 00:11:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.885 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:19.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:19.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:11:19.885 00:11:19.885 --- 10.0.0.3 ping statistics --- 00:11:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.885 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:19.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:19.885 00:11:19.885 --- 10.0.0.1 ping statistics --- 00:11:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.885 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.885 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=73202 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 73202 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 73202 ']' 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 07:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:20.144 [2024-07-25 07:25:52.697536] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:11:20.144 [2024-07-25 07:25:52.697612] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.144 [2024-07-25 07:25:52.838900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.403 [2024-07-25 07:25:52.945606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.403 [2024-07-25 07:25:52.945667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.403 [2024-07-25 07:25:52.945675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.403 [2024-07-25 07:25:52.945681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.403 [2024-07-25 07:25:52.945686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.403 [2024-07-25 07:25:52.945779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:20.403 [2024-07-25 07:25:52.946808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:20.403 [2024-07-25 07:25:52.946947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.404 [2024-07-25 07:25:52.946950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.971 [2024-07-25 07:25:53.665446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.971 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.231 Malloc0 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.231 [2024-07-25 07:25:53.743960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.231 { 00:11:21.231 "params": { 00:11:21.231 "name": "Nvme$subsystem", 00:11:21.231 "trtype": "$TEST_TRANSPORT", 00:11:21.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.231 "adrfam": "ipv4", 00:11:21.231 "trsvcid": "$NVMF_PORT", 00:11:21.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.231 "hdgst": ${hdgst:-false}, 00:11:21.231 "ddgst": ${ddgst:-false} 00:11:21.231 }, 00:11:21.231 "method": "bdev_nvme_attach_controller" 00:11:21.231 } 00:11:21.231 EOF 00:11:21.231 )") 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:21.231 07:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.231 "params": { 00:11:21.231 "name": "Nvme1", 00:11:21.231 "trtype": "tcp", 00:11:21.231 "traddr": "10.0.0.2", 00:11:21.231 "adrfam": "ipv4", 00:11:21.231 "trsvcid": "4420", 00:11:21.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.231 "hdgst": false, 00:11:21.231 "ddgst": false 00:11:21.231 }, 00:11:21.231 "method": "bdev_nvme_attach_controller" 00:11:21.231 }' 00:11:21.231 [2024-07-25 07:25:53.803528] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:11:21.231 [2024-07-25 07:25:53.803602] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73257 ] 00:11:21.231 [2024-07-25 07:25:53.946695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.491 [2024-07-25 07:25:54.061822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.491 [2024-07-25 07:25:54.061944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.491 [2024-07-25 07:25:54.061946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.491 I/O targets: 00:11:21.491 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:21.491 00:11:21.491 00:11:21.491 CUnit - A unit testing framework for C - Version 2.1-3 00:11:21.491 http://cunit.sourceforge.net/ 00:11:21.491 00:11:21.491 00:11:21.491 Suite: bdevio tests on: Nvme1n1 00:11:21.750 Test: blockdev write read block ...passed 00:11:21.750 Test: blockdev write zeroes read block ...passed 00:11:21.750 Test: blockdev write zeroes read no split ...passed 00:11:21.750 Test: blockdev write zeroes read split ...passed 00:11:21.750 Test: blockdev write zeroes read split partial ...passed 00:11:21.750 Test: blockdev reset ...[2024-07-25 07:25:54.343228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:21.750 [2024-07-25 07:25:54.343363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc0180 (9): Bad file descriptor 00:11:21.750 [2024-07-25 07:25:54.361242] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:21.750 passed 00:11:21.750 Test: blockdev write read 8 blocks ...passed 00:11:21.750 Test: blockdev write read size > 128k ...passed 00:11:21.750 Test: blockdev write read invalid size ...passed 00:11:21.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:21.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:21.750 Test: blockdev write read max offset ...passed 00:11:22.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.010 Test: blockdev writev readv 8 blocks ...passed 00:11:22.010 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.010 Test: blockdev writev readv block ...passed 00:11:22.010 Test: blockdev writev readv size > 128k ...passed 00:11:22.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.010 Test: blockdev comparev and writev ...[2024-07-25 07:25:54.534429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.534485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.534501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.534509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.534768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.534786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.534799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.534806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.535067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.535087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.535100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.535107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.535368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.535390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.535402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.010 [2024-07-25 07:25:54.535409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.010 passed 00:11:22.010 Test: blockdev nvme passthru rw ...passed 00:11:22.010 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:25:54.618517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.010 [2024-07-25 07:25:54.618566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.618668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.010 [2024-07-25 07:25:54.618680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.618811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.010 [2024-07-25 07:25:54.618830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.010 [2024-07-25 07:25:54.618919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.010 [2024-07-25 07:25:54.618932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.010 passed 00:11:22.010 Test: blockdev nvme admin passthru ...passed 00:11:22.010 Test: blockdev copy ...passed 00:11:22.010 00:11:22.010 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.010 suites 1 1 n/a 0 0 00:11:22.010 tests 23 23 23 0 0 00:11:22.010 asserts 152 152 152 0 n/a 00:11:22.010 00:11:22.010 Elapsed time = 0.909 seconds 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.270 rmmod nvme_tcp 00:11:22.270 rmmod nvme_fabrics 00:11:22.270 rmmod nvme_keyring 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
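[Editor's note] Stepping back from the teardown trace: the bdevio suite that just passed above never saw a config file on disk; gen_nvmf_target_json fed the bdev_nvme_attach_controller entry to bdevio through the /dev/fd/62 process substitution shown earlier. A hedged sketch of the equivalent standalone invocation with an on-disk config: the params block is copied verbatim from the trace, but the surrounding "subsystems"/"config" wrapper and the bdevio exit behaviour are assumptions, since the trace only prints the inner entry.

```bash
# Reproduce the bdevio run with an explicit JSON config file
# (wrapper structure is an assumption; inner entry comes from the log).
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json
```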
00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 73202 ']' 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 73202 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 73202 ']' 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 73202 00:11:22.270 07:25:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73202 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73202' 00:11:22.530 killing process with pid 73202 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 73202 00:11:22.530 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 73202 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:22.790 00:11:22.790 real 0m3.215s 00:11:22.790 user 0m11.241s 00:11:22.790 sys 0m0.826s 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.790 ************************************ 00:11:22.790 END TEST nvmf_bdevio 00:11:22.790 ************************************ 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:22.790 00:11:22.790 real 3m27.352s 00:11:22.790 user 10m58.316s 00:11:22.790 sys 0m52.341s 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.790 ************************************ 00:11:22.790 END TEST nvmf_target_core 00:11:22.790 ************************************ 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.790 07:25:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:22.790 07:25:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:22.790 07:25:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.790 07:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.790 ************************************ 00:11:22.790 START TEST nvmf_target_extra 00:11:22.790 ************************************ 00:11:22.790 07:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.078 * Looking for test storage... 00:11:23.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:23.078 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:23.078 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:23.079 07:25:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.079 ************************************ 00:11:23.079 START TEST nvmf_example 00:11:23.079 ************************************ 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:23.079 * Looking for test storage... 00:11:23.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:23.079 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.080 07:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.080 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
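The NVMF_* variables traced above name the veth endpoints, bridge and network namespace that nvmf_veth_init assembles next. A condensed sketch of that topology, reconstructed from the ip/iptables commands traced below (same interface names and 10.0.0.x addresses; illustrative, not a verbatim copy of nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk                                   # target side runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge                                 # bridge joining the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic to port 4420
# all interfaces and the bridge are then brought up, and connectivity is verified with ping,
# exactly as the traced commands that follow show.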
00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:23.340 Cannot find device "nvmf_tgt_br" 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # true 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.340 Cannot find device "nvmf_tgt_br2" 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # true 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:23.340 Cannot find device "nvmf_tgt_br" 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # true 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:23.340 Cannot find device "nvmf_tgt_br2" 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # true 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:23.340 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:23.340 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:23.340 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:23.341 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:23.341 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:23.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:11:23.601 00:11:23.601 --- 10.0.0.2 ping statistics --- 00:11:23.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.601 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:23.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:23.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:11:23.601 00:11:23.601 --- 10.0.0.3 ping statistics --- 00:11:23.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.601 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:23.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:11:23.601 00:11:23.601 --- 10.0.0.1 ping statistics --- 00:11:23.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.601 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.601 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=73488 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 73488 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 73488 ']' 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.602 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.602 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.539 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.539 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:11:24.539 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:24.539 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:24.539 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.799 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.800 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.800 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.800 07:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.800 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.800 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:11:24.800 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:37.043 Initializing NVMe Controllers 00:11:37.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:37.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:37.043 Initialization complete. Launching workers. 00:11:37.043 ======================================================== 00:11:37.043 Latency(us) 00:11:37.043 Device Information : IOPS MiB/s Average min max 00:11:37.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15398.08 60.15 4155.82 736.72 22070.95 00:11:37.043 ======================================================== 00:11:37.043 Total : 15398.08 60.15 4155.82 736.72 22070.95 00:11:37.043 00:11:37.043 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.044 rmmod nvme_tcp 00:11:37.044 rmmod nvme_fabrics 00:11:37.044 rmmod nvme_keyring 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 73488 ']' 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 73488 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 73488 ']' 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 73488 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73488 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:11:37.044 07:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:11:37.044 killing process with pid 73488 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73488' 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 73488 00:11:37.044 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 73488 00:11:37.044 nvmf threads initialize successfully 00:11:37.044 bdev subsystem init successfully 00:11:37.044 created a nvmf target service 00:11:37.044 create targets's poll groups done 00:11:37.044 all subsystems of target started 00:11:37.044 nvmf target is running 00:11:37.044 all subsystems of target stopped 00:11:37.044 destroy targets's poll groups done 00:11:37.044 destroyed the nvmf target service 00:11:37.044 bdev subsystem finish successfully 00:11:37.044 nvmf threads destroy successfully 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.044 00:11:37.044 real 0m12.607s 00:11:37.044 user 0m45.104s 00:11:37.044 sys 0m1.866s 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.044 ************************************ 00:11:37.044 END TEST nvmf_example 00:11:37.044 ************************************ 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.044 ************************************ 00:11:37.044 START TEST nvmf_filesystem 00:11:37.044 ************************************ 00:11:37.044 07:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:37.044 * Looking for test storage... 00:11:37.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:37.044 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:37.045 07:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 
-- # CONFIG_MAX_LCORES=128 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:37.045 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:37.046 #define SPDK_CONFIG_H 00:11:37.046 #define SPDK_CONFIG_APPS 1 00:11:37.046 #define SPDK_CONFIG_ARCH native 00:11:37.046 #undef SPDK_CONFIG_ASAN 00:11:37.046 #define SPDK_CONFIG_AVAHI 1 00:11:37.046 #undef SPDK_CONFIG_CET 00:11:37.046 #define SPDK_CONFIG_COVERAGE 1 00:11:37.046 #define SPDK_CONFIG_CROSS_PREFIX 00:11:37.046 #undef SPDK_CONFIG_CRYPTO 00:11:37.046 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:37.046 #undef SPDK_CONFIG_CUSTOMOCF 00:11:37.046 #undef SPDK_CONFIG_DAOS 00:11:37.046 #define 
SPDK_CONFIG_DAOS_DIR 00:11:37.046 #define SPDK_CONFIG_DEBUG 1 00:11:37.046 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:37.046 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:37.046 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:37.046 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:37.046 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:37.046 #undef SPDK_CONFIG_DPDK_UADK 00:11:37.046 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:37.046 #define SPDK_CONFIG_EXAMPLES 1 00:11:37.046 #undef SPDK_CONFIG_FC 00:11:37.046 #define SPDK_CONFIG_FC_PATH 00:11:37.046 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:37.046 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:37.046 #undef SPDK_CONFIG_FUSE 00:11:37.046 #undef SPDK_CONFIG_FUZZER 00:11:37.046 #define SPDK_CONFIG_FUZZER_LIB 00:11:37.046 #define SPDK_CONFIG_GOLANG 1 00:11:37.046 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:37.046 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:37.046 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:37.046 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:37.046 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:37.046 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:37.046 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:37.046 #define SPDK_CONFIG_IDXD 1 00:11:37.046 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:37.046 #undef SPDK_CONFIG_IPSEC_MB 00:11:37.046 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:37.046 #define SPDK_CONFIG_ISAL 1 00:11:37.046 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:37.046 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:37.046 #define SPDK_CONFIG_LIBDIR 00:11:37.046 #undef SPDK_CONFIG_LTO 00:11:37.046 #define SPDK_CONFIG_MAX_LCORES 128 00:11:37.046 #define SPDK_CONFIG_NVME_CUSE 1 00:11:37.046 #undef SPDK_CONFIG_OCF 00:11:37.046 #define SPDK_CONFIG_OCF_PATH 00:11:37.046 #define SPDK_CONFIG_OPENSSL_PATH 00:11:37.046 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:37.046 #define SPDK_CONFIG_PGO_DIR 00:11:37.046 #undef SPDK_CONFIG_PGO_USE 00:11:37.046 #define SPDK_CONFIG_PREFIX /usr/local 00:11:37.046 #undef SPDK_CONFIG_RAID5F 00:11:37.046 #undef SPDK_CONFIG_RBD 00:11:37.046 #define SPDK_CONFIG_RDMA 1 00:11:37.046 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:37.046 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:37.046 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:37.046 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:37.046 #define SPDK_CONFIG_SHARED 1 00:11:37.046 #undef SPDK_CONFIG_SMA 00:11:37.046 #define SPDK_CONFIG_TESTS 1 00:11:37.046 #undef SPDK_CONFIG_TSAN 00:11:37.046 #define SPDK_CONFIG_UBLK 1 00:11:37.046 #define SPDK_CONFIG_UBSAN 1 00:11:37.046 #undef SPDK_CONFIG_UNIT_TESTS 00:11:37.046 #undef SPDK_CONFIG_URING 00:11:37.046 #define SPDK_CONFIG_URING_PATH 00:11:37.046 #undef SPDK_CONFIG_URING_ZNS 00:11:37.046 #define SPDK_CONFIG_USDT 1 00:11:37.046 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:37.046 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:37.046 #undef SPDK_CONFIG_VFIO_USER 00:11:37.046 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:37.046 #define SPDK_CONFIG_VHOST 1 00:11:37.046 #define SPDK_CONFIG_VIRTIO 1 00:11:37.046 #undef SPDK_CONFIG_VTUNE 00:11:37.046 #define SPDK_CONFIG_VTUNE_DIR 00:11:37.046 #define SPDK_CONFIG_WERROR 1 00:11:37.046 #define SPDK_CONFIG_WPDK_DIR 00:11:37.046 #undef SPDK_CONFIG_XNVME 00:11:37.046 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.046 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:37.047 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 
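The bare ': 0' / ': 1' lines paired with 'export SPDK_TEST_*' traced above and below are the xtrace of autotest_common.sh giving each test flag a default only when the job environment has not already set it; flags that the job pre-enabled appear as ': 1' (or ': tcp' for the transport) instead of the default. A minimal sketch of that default-then-export idiom (flag name and exact expansion form are illustrative, not the verbatim script):

: "${SPDK_TEST_NVME:=0}"    # traced as ': 0' when the flag was unset; a pre-set flag traces its value instead
export SPDK_TEST_NVME       # traced as 'export SPDK_TEST_NVME'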
00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:37.048 07:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:37.048 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 
-- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:11:37.049 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 73729 ]] 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 73729 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@334 -- # local source fs size avail mount use 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.GE2bmO 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.GE2bmO/tests/target /tmp/spdk.GE2bmO 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6257971200 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2487009280 
00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=20148224 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13776715776 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5253300224 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13776715776 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5253300224 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:37.050 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267760640 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.051 
07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92384354304 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7318425600 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:11:37.051 * Looking for test storage... 
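Editor's note: the df -T scan above and the candidate loop that continues below are the test-storage probe. A condensed sketch under the variable names visible in the trace (a reconstruction of set_test_storage, not its verbatim source; units follow whatever df reports in this environment):

requested_size=2214592512                       # 2 GiB plus scratch headroom, as logged
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source; fss["$mount"]=$fs
    sizes["$mount"]=$size; uses["$mount"]=$use; avails["$mount"]=$avail
done < <(df -T | grep -v Filesystem)
mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')   # resolves to /home in this run
target_space=${avails["$mount"]}                              # 13776715776 here
(( target_space >= requested_size )) && export SPDK_TEST_STORAGE=$testdir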
00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13776715776 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 
-- # xtrace_restore 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.051 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
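Editor's note: the nvmf_veth_init steps traced below build the virtual test network: one bridge (nvmf_br) joining the host-side veth ends, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. Condensed from the commands in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair, stays in the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT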
00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:37.052 Cannot find device "nvmf_tgt_br" 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:37.052 Cannot find device "nvmf_tgt_br2" 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:37.052 Cannot find device "nvmf_tgt_br" 00:11:37.052 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:37.053 Cannot find device "nvmf_tgt_br2" 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:37.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:37.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@178 -- # 
ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:37.053 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:37.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:37.053 00:11:37.053 --- 10.0.0.2 ping statistics --- 00:11:37.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.053 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:37.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:37.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:11:37.053 00:11:37.053 --- 10.0.0.3 ping statistics --- 00:11:37.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.053 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:37.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:37.053 00:11:37.053 --- 10.0.0.1 ping statistics --- 00:11:37.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.053 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 ************************************ 00:11:37.053 START TEST nvmf_filesystem_no_in_capsule 00:11:37.053 ************************************ 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=73888 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 73888 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.053 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 73888 ']' 00:11:37.054 07:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.054 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.054 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.054 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.054 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 [2024-07-25 07:26:09.162070] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:37.054 [2024-07-25 07:26:09.162164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.054 [2024-07-25 07:26:09.302528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.054 [2024-07-25 07:26:09.412754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.054 [2024-07-25 07:26:09.412804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.054 [2024-07-25 07:26:09.412814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.054 [2024-07-25 07:26:09.412821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.054 [2024-07-25 07:26:09.412828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
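Editor's note: at this point nvmfappstart has launched the target inside the namespace and waitforlisten is polling its RPC socket; the nvmf_filesystem_no_in_capsule case then provisions the target over JSON-RPC and connects the initiator. A condensed sketch of the launch and the RPC/connect sequence traced below (commands copied from the log; rpc_cmd is the harness wrapper around scripts/rpc.py, and capturing the PID via $! is an assumption):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # 73888 in this run; waitforlisten polls /var/tmp/spdk.sock

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0          # in-capsule data size 0 for this variant
rpc_cmd bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB backing bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
    --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420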
00:11:37.054 [2024-07-25 07:26:09.412980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.054 [2024-07-25 07:26:09.413199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.054 [2024-07-25 07:26:09.413053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.054 [2024-07-25 07:26:09.413202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 [2024-07-25 07:26:10.163390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.643 07:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 [2024-07-25 07:26:10.333836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:37.643 { 00:11:37.643 "aliases": [ 00:11:37.643 "18e67d29-4ae6-4737-a861-35e454be0d7d" 00:11:37.643 ], 00:11:37.643 "assigned_rate_limits": { 00:11:37.643 "r_mbytes_per_sec": 0, 00:11:37.643 "rw_ios_per_sec": 0, 00:11:37.643 "rw_mbytes_per_sec": 0, 00:11:37.643 "w_mbytes_per_sec": 0 00:11:37.643 }, 00:11:37.643 "block_size": 512, 00:11:37.643 "claim_type": "exclusive_write", 00:11:37.643 "claimed": true, 00:11:37.643 "driver_specific": {}, 00:11:37.643 "memory_domains": [ 00:11:37.643 { 00:11:37.643 "dma_device_id": "system", 00:11:37.643 "dma_device_type": 1 00:11:37.643 }, 00:11:37.643 { 00:11:37.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.643 
"dma_device_type": 2 00:11:37.643 } 00:11:37.643 ], 00:11:37.643 "name": "Malloc1", 00:11:37.643 "num_blocks": 1048576, 00:11:37.643 "product_name": "Malloc disk", 00:11:37.643 "supported_io_types": { 00:11:37.643 "abort": true, 00:11:37.643 "compare": false, 00:11:37.643 "compare_and_write": false, 00:11:37.643 "copy": true, 00:11:37.643 "flush": true, 00:11:37.643 "get_zone_info": false, 00:11:37.643 "nvme_admin": false, 00:11:37.643 "nvme_io": false, 00:11:37.643 "nvme_io_md": false, 00:11:37.643 "nvme_iov_md": false, 00:11:37.643 "read": true, 00:11:37.643 "reset": true, 00:11:37.643 "seek_data": false, 00:11:37.643 "seek_hole": false, 00:11:37.643 "unmap": true, 00:11:37.643 "write": true, 00:11:37.643 "write_zeroes": true, 00:11:37.643 "zcopy": true, 00:11:37.643 "zone_append": false, 00:11:37.643 "zone_management": false 00:11:37.643 }, 00:11:37.643 "uuid": "18e67d29-4ae6-4737-a861-35e454be0d7d", 00:11:37.643 "zoned": false 00:11:37.643 } 00:11:37.643 ]' 00:11:37.643 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.902 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.903 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:37.903 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.903 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:37.903 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:40.439 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.377 ************************************ 00:11:41.377 START TEST filesystem_ext4 00:11:41.377 ************************************ 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
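Each filesystem_* subtest below drives the same make_filesystem helper from autotest_common.sh; the xtrace only shows its expanded commands, so here is a minimal sketch of the flag-selection logic it performs. This is a reconstruction from the trace, not the upstream source, and the retry bookkeeping around "local i=0" visible in the trace is omitted.

# Sketch of the make_filesystem helper exercised in the trace below (assumption:
# reconstructed from the xtrace, retry loop elided).
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force

    # ext4's mkfs forces with uppercase -F; btrfs and xfs use lowercase -f.
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    mkfs.$fstype $force "$dev_name"
}

In the trace this is what produces "mkfs.ext4 -F /dev/nvme0n1p1" here, and "mkfs.btrfs -f" / "mkfs.xfs -f" in the later subtests.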
00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:41.377 mke2fs 1.46.5 (30-Dec-2021) 00:11:41.377 Discarding device blocks: 0/522240 done 00:11:41.377 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:41.377 Filesystem UUID: ae9942fe-4a23-4b06-96d5-5d8f16aea7f7 00:11:41.377 Superblock backups stored on blocks: 00:11:41.377 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:41.377 00:11:41.377 Allocating group tables: 0/64 done 00:11:41.377 Writing inode tables: 0/64 done 00:11:41.377 Creating journal (8192 blocks): done 00:11:41.377 Writing superblocks and filesystem accounting information: 0/64 done 00:11:41.377 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:41.377 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.377 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.637 
07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 73888 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.637 00:11:41.637 real 0m0.342s 00:11:41.637 user 0m0.017s 00:11:41.637 sys 0m0.052s 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:41.637 ************************************ 00:11:41.637 END TEST filesystem_ext4 00:11:41.637 ************************************ 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.637 ************************************ 00:11:41.637 START TEST filesystem_btrfs 00:11:41.637 ************************************ 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:41.637 07:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:41.637 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:41.637 btrfs-progs v6.6.2 00:11:41.637 See https://btrfs.readthedocs.io for more information. 00:11:41.637 00:11:41.637 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:41.637 NOTE: several default settings have changed in version 5.15, please make sure 00:11:41.637 this does not affect your deployments: 00:11:41.637 - DUP for metadata (-m dup) 00:11:41.637 - enabled no-holes (-O no-holes) 00:11:41.637 - enabled free-space-tree (-R free-space-tree) 00:11:41.637 00:11:41.637 Label: (null) 00:11:41.637 UUID: dec665f8-0b12-4cdd-b044-dd88ecd73aed 00:11:41.637 Node size: 16384 00:11:41.637 Sector size: 4096 00:11:41.637 Filesystem size: 510.00MiB 00:11:41.637 Block group profiles: 00:11:41.638 Data: single 8.00MiB 00:11:41.638 Metadata: DUP 32.00MiB 00:11:41.638 System: DUP 8.00MiB 00:11:41.638 SSD detected: yes 00:11:41.638 Zoned device: no 00:11:41.638 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:41.638 Runtime features: free-space-tree 00:11:41.638 Checksum: crc32c 00:11:41.638 Number of devices: 1 00:11:41.638 Devices: 00:11:41.638 ID SIZE PATH 00:11:41.638 1 510.00MiB /dev/nvme0n1p1 00:11:41.638 00:11:41.638 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:41.638 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.638 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.638 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:41.638 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 73888 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.897 00:11:41.897 real 0m0.204s 00:11:41.897 user 0m0.024s 00:11:41.897 sys 0m0.073s 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.897 ************************************ 00:11:41.897 END TEST filesystem_btrfs 00:11:41.897 ************************************ 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.897 ************************************ 00:11:41.897 START TEST filesystem_xfs 00:11:41.897 ************************************ 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:41.897 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:41.897 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:41.897 = sectsz=512 attr=2, projid32bit=1 00:11:41.897 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:41.897 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:41.897 data = bsize=4096 blocks=130560, imaxpct=25 00:11:41.897 = sunit=0 swidth=0 blks 00:11:41.897 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:41.897 log =internal log bsize=4096 blocks=16384, version=2 00:11:41.897 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:41.897 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:42.467 Discarding blocks...Done. 00:11:42.467 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:42.467 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.003 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.003 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:45.003 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.003 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 73888 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.004 00:11:45.004 real 0m2.980s 00:11:45.004 user 0m0.028s 00:11:45.004 sys 0m0.059s 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:45.004 ************************************ 00:11:45.004 END TEST filesystem_xfs 00:11:45.004 ************************************ 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
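The disconnect above closes out the no_in_capsule pass; the trace that follows only verifies it, deletes the subsystem, and kills the target (pid 73888). For reference, the pass boils down to the sequence below, reconstructed from the trace. rpc.py stands in for the suite's rpc_cmd wrapper, and the network-namespace prefix used to launch the target is omitted.

# Target side: RPC sequence shown in the trace (rpc.py path is an assumption).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0           # no in-capsule data
rpc.py bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach over TCP, run the ext4/btrfs/xfs subtests, then detach.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
    --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# ... filesystem_ext4 / filesystem_btrfs / filesystem_xfs run here ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1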
00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 73888 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 73888 ']' 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 73888 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73888 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:45.004 killing process with pid 73888 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73888' 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 73888 00:11:45.004 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@972 -- # wait 73888 00:11:45.298 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:45.298 00:11:45.298 real 0m8.903s 00:11:45.298 user 0m34.124s 00:11:45.298 sys 0m1.223s 00:11:45.298 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.298 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.298 ************************************ 00:11:45.298 END TEST nvmf_filesystem_no_in_capsule 00:11:45.298 ************************************ 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.581 ************************************ 00:11:45.581 START TEST nvmf_filesystem_in_capsule 00:11:45.581 ************************************ 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=74196 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 74196 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 74196 ']' 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
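The second pass repeats the same flow with in_capsule=4096; the only functional difference on the target side is the -c argument passed when the transport is created, which becomes visible once the new target (pid 74196) finishes starting below. Roughly (rpc.py path is an assumption, as above):

# First pass: no in-capsule data allowed on the TCP transport.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# Second pass: allow up to 4096 bytes of in-capsule data per command.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096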
00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.581 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.581 [2024-07-25 07:26:18.124298] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:45.582 [2024-07-25 07:26:18.124370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.582 [2024-07-25 07:26:18.262089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.840 [2024-07-25 07:26:18.358733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.841 [2024-07-25 07:26:18.358800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.841 [2024-07-25 07:26:18.358806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.841 [2024-07-25 07:26:18.358811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.841 [2024-07-25 07:26:18.358815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.841 [2024-07-25 07:26:18.359204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.841 [2024-07-25 07:26:18.359436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.841 [2024-07-25 07:26:18.359531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.841 [2024-07-25 07:26:18.359536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.409 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.409 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:46.409 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.409 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:46.409 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.409 [2024-07-25 07:26:19.041474] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.409 07:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.409 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.668 Malloc1 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.668 [2024-07-25 07:26:19.210902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:46.668 07:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.668 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:46.668 { 00:11:46.668 "aliases": [ 00:11:46.668 "38db38d2-96ec-45e2-84d2-1773374dde94" 00:11:46.668 ], 00:11:46.668 "assigned_rate_limits": { 00:11:46.668 "r_mbytes_per_sec": 0, 00:11:46.668 "rw_ios_per_sec": 0, 00:11:46.668 "rw_mbytes_per_sec": 0, 00:11:46.668 "w_mbytes_per_sec": 0 00:11:46.668 }, 00:11:46.668 "block_size": 512, 00:11:46.668 "claim_type": "exclusive_write", 00:11:46.668 "claimed": true, 00:11:46.668 "driver_specific": {}, 00:11:46.668 "memory_domains": [ 00:11:46.668 { 00:11:46.668 "dma_device_id": "system", 00:11:46.668 "dma_device_type": 1 00:11:46.668 }, 00:11:46.668 { 00:11:46.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.668 "dma_device_type": 2 00:11:46.668 } 00:11:46.668 ], 00:11:46.668 "name": "Malloc1", 00:11:46.668 "num_blocks": 1048576, 00:11:46.668 "product_name": "Malloc disk", 00:11:46.669 "supported_io_types": { 00:11:46.669 "abort": true, 00:11:46.669 "compare": false, 00:11:46.669 "compare_and_write": false, 00:11:46.669 "copy": true, 00:11:46.669 "flush": true, 00:11:46.669 "get_zone_info": false, 00:11:46.669 "nvme_admin": false, 00:11:46.669 "nvme_io": false, 00:11:46.669 "nvme_io_md": false, 00:11:46.669 "nvme_iov_md": false, 00:11:46.669 "read": true, 00:11:46.669 "reset": true, 00:11:46.669 "seek_data": false, 00:11:46.669 "seek_hole": false, 00:11:46.669 "unmap": true, 00:11:46.669 "write": true, 00:11:46.669 "write_zeroes": true, 00:11:46.669 "zcopy": true, 00:11:46.669 "zone_append": false, 00:11:46.669 "zone_management": false 00:11:46.669 }, 00:11:46.669 "uuid": "38db38d2-96ec-45e2-84d2-1773374dde94", 00:11:46.669 "zoned": false 00:11:46.669 } 00:11:46.669 ]' 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:46.669 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.928 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.928 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:46.928 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.928 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:46.928 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:48.834 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:48.835 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:48.835 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:49.093 07:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:49.093 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.060 ************************************ 00:11:50.060 START TEST filesystem_in_capsule_ext4 00:11:50.060 ************************************ 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:50.060 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:50.061 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:50.061 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:50.061 mke2fs 1.46.5 (30-Dec-2021) 00:11:50.061 Discarding device blocks: 0/522240 done 00:11:50.061 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.061 Filesystem UUID: e25411b4-2029-4e67-810e-a8071ca8d3a8 00:11:50.061 Superblock backups stored on blocks: 00:11:50.061 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.061 00:11:50.061 Allocating group tables: 0/64 done 00:11:50.061 Writing inode tables: 
0/64 done 00:11:50.319 Creating journal (8192 blocks): done 00:11:50.319 Writing superblocks and filesystem accounting information: 0/64 done 00:11:50.319 00:11:50.319 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:50.319 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.319 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.319 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 74196 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:50.319 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.578 00:11:50.578 real 0m0.360s 00:11:50.578 user 0m0.037s 00:11:50.578 sys 0m0.064s 00:11:50.578 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.578 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:50.578 ************************************ 00:11:50.578 END TEST filesystem_in_capsule_ext4 00:11:50.578 ************************************ 00:11:50.578 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:50.578 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:50.578 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 
************************************ 00:11:50.579 START TEST filesystem_in_capsule_btrfs 00:11:50.579 ************************************ 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:50.579 btrfs-progs v6.6.2 00:11:50.579 See https://btrfs.readthedocs.io for more information. 00:11:50.579 00:11:50.579 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:50.579 NOTE: several default settings have changed in version 5.15, please make sure 00:11:50.579 this does not affect your deployments: 00:11:50.579 - DUP for metadata (-m dup) 00:11:50.579 - enabled no-holes (-O no-holes) 00:11:50.579 - enabled free-space-tree (-R free-space-tree) 00:11:50.579 00:11:50.579 Label: (null) 00:11:50.579 UUID: 8d2c14cb-f272-4824-b7de-89d41664f7e3 00:11:50.579 Node size: 16384 00:11:50.579 Sector size: 4096 00:11:50.579 Filesystem size: 510.00MiB 00:11:50.579 Block group profiles: 00:11:50.579 Data: single 8.00MiB 00:11:50.579 Metadata: DUP 32.00MiB 00:11:50.579 System: DUP 8.00MiB 00:11:50.579 SSD detected: yes 00:11:50.579 Zoned device: no 00:11:50.579 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:50.579 Runtime features: free-space-tree 00:11:50.579 Checksum: crc32c 00:11:50.579 Number of devices: 1 00:11:50.579 Devices: 00:11:50.579 ID SIZE PATH 00:11:50.579 1 510.00MiB /dev/nvme0n1p1 00:11:50.579 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:50.579 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 74196 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:50.838 00:11:50.838 real 0m0.322s 00:11:50.838 user 0m0.027s 00:11:50.838 sys 0m0.094s 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:11:50.838 ************************************ 00:11:50.838 END TEST filesystem_in_capsule_btrfs 00:11:50.838 ************************************ 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.838 ************************************ 00:11:50.838 START TEST filesystem_in_capsule_xfs 00:11:50.838 ************************************ 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:50.838 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:51.096 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:51.096 = sectsz=512 attr=2, projid32bit=1 00:11:51.096 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:51.096 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:51.097 data = bsize=4096 blocks=130560, imaxpct=25 00:11:51.097 = sunit=0 swidth=0 blks 00:11:51.097 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:51.097 log =internal log bsize=4096 blocks=16384, version=2 00:11:51.097 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:51.097 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:51.663 Discarding blocks...Done. 
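The trace above walks through SPDK's make_filesystem helper and the mount/write/remove checks from target/filesystem.sh against the NVMe-oF attached namespace. As a rough, self-contained condensation of what those steps do (the device path, mount point and helper name here are illustrative, not the exact SPDK code):

#!/usr/bin/env bash
# Illustrative condensation of the filesystem smoke test traced above.
set -euo pipefail

dev=/dev/nvme0n1p1     # partition created on the NVMe-oF attached namespace
mnt=/mnt/device

make_and_check_fs() {
    local fstype=$1
    local force=-f
    [[ $fstype == ext4 ]] && force=-F   # ext4 uses -F; btrfs/xfs use -f

    "mkfs.$fstype" "$force" "$dev"

    mount "$dev" "$mnt"
    touch "$mnt/aaa"        # simple write...
    sync
    rm "$mnt/aaa"           # ...and delete, to exercise the data path
    sync
    umount "$mnt"
}

for fstype in ext4 btrfs xfs; do
    make_and_check_fs "$fstype"
done

The per-filesystem branch on the force flag is why the traced helper checks the fstype before running mkfs; the real helper also retries on failure, which this sketch omits.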
00:11:51.663 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:51.663 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 74196 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.565 ************************************ 00:11:53.565 END TEST filesystem_in_capsule_xfs 00:11:53.565 ************************************ 00:11:53.565 00:11:53.565 real 0m2.618s 00:11:53.565 user 0m0.030s 00:11:53.565 sys 0m0.067s 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1217 -- # local i=0 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.565 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 74196 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 74196 ']' 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 74196 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74196 00:11:53.824 killing process with pid 74196 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74196' 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 74196 00:11:53.824 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 74196 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:54.083 00:11:54.083 real 0m8.636s 00:11:54.083 user 0m33.137s 00:11:54.083 sys 0m1.267s 00:11:54.083 07:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.083 ************************************ 00:11:54.083 END TEST nvmf_filesystem_in_capsule 00:11:54.083 ************************************ 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.083 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.083 rmmod nvme_tcp 00:11:54.083 rmmod nvme_fabrics 00:11:54.344 rmmod nvme_keyring 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:54.344 00:11:54.344 real 0m18.603s 00:11:54.344 user 1m7.580s 00:11:54.344 sys 0m3.071s 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.344 ************************************ 00:11:54.344 END TEST nvmf_filesystem 00:11:54.344 ************************************ 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:54.344 07:26:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.344 ************************************ 00:11:54.344 START TEST nvmf_target_discovery 00:11:54.344 ************************************ 00:11:54.344 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:54.344 * Looking for test storage... 00:11:54.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.344 07:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:54.344 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.603 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.603 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.603 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:54.604 Cannot find device "nvmf_tgt_br" 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.604 Cannot find device "nvmf_tgt_br2" 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:54.604 Cannot find device "nvmf_tgt_br" 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:54.604 Cannot find device "nvmf_tgt_br2" 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.604 07:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:54.604 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:54.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:54.864 00:11:54.864 --- 10.0.0.2 ping statistics --- 00:11:54.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.864 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:54.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:54.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:11:54.864 00:11:54.864 --- 10.0.0.3 ping statistics --- 00:11:54.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.864 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:54.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:11:54.864 00:11:54.864 --- 10.0.0.1 ping statistics --- 00:11:54.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.864 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:54.864 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=74651 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 74651 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 74651 ']' 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
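Before the discovery test starts, the trace above shows nvmf_veth_init building a bridged veth topology so the target (inside the nvmf_tgt_ns_spdk namespace) and the initiator (in the root namespace) can reach each other over TCP on a single VM, then launching nvmf_tgt inside that namespace. A simplified sketch of that bring-up, keeping the names, addresses and port from the log but covering only the first target interface and replacing the waitforlisten helper with a plain socket poll (the nvmf_tgt path assumes an SPDK checkout):

#!/usr/bin/env bash
set -euo pipefail

NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2    # initiator -> target reachability check

# Launch the target inside the namespace and wait for its RPC socket.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
pid=$!
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done
echo "nvmf_tgt ($pid) is listening on /var/tmp/spdk.sock"

The full test additionally creates a second target interface (nvmf_tgt_if2 at 10.0.0.3) on the same bridge, which this sketch leaves out.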
00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.865 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.865 [2024-07-25 07:26:27.505383] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:11:54.865 [2024-07-25 07:26:27.505450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.123 [2024-07-25 07:26:27.646380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.123 [2024-07-25 07:26:27.750905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.123 [2024-07-25 07:26:27.750973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.123 [2024-07-25 07:26:27.750981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.123 [2024-07-25 07:26:27.750987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.123 [2024-07-25 07:26:27.750992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.123 [2024-07-25 07:26:27.751192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.123 [2024-07-25 07:26:27.751419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.123 [2024-07-25 07:26:27.751653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.123 [2024-07-25 07:26:27.751677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.690 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.690 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:55.690 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.690 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:55.690 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.949 [2024-07-25 07:26:28.466494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.949 Null1 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.949 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 [2024-07-25 07:26:28.536478] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 Null2 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode2 Null2 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 Null3 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 Null4 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.950 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 4420 00:11:56.210 00:11:56.210 Discovery Log Number of Records 6, Generation counter 6 00:11:56.210 =====Discovery Log Entry 0====== 00:11:56.210 trtype: tcp 00:11:56.210 adrfam: ipv4 00:11:56.210 subtype: current discovery subsystem 00:11:56.210 treq: not required 00:11:56.210 portid: 0 
00:11:56.210 trsvcid: 4420 00:11:56.210 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:56.210 traddr: 10.0.0.2 00:11:56.210 eflags: explicit discovery connections, duplicate discovery information 00:11:56.210 sectype: none 00:11:56.210 =====Discovery Log Entry 1====== 00:11:56.210 trtype: tcp 00:11:56.210 adrfam: ipv4 00:11:56.210 subtype: nvme subsystem 00:11:56.210 treq: not required 00:11:56.210 portid: 0 00:11:56.210 trsvcid: 4420 00:11:56.210 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:56.210 traddr: 10.0.0.2 00:11:56.210 eflags: none 00:11:56.210 sectype: none 00:11:56.210 =====Discovery Log Entry 2====== 00:11:56.210 trtype: tcp 00:11:56.210 adrfam: ipv4 00:11:56.210 subtype: nvme subsystem 00:11:56.210 treq: not required 00:11:56.210 portid: 0 00:11:56.210 trsvcid: 4420 00:11:56.210 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:56.210 traddr: 10.0.0.2 00:11:56.210 eflags: none 00:11:56.210 sectype: none 00:11:56.210 =====Discovery Log Entry 3====== 00:11:56.210 trtype: tcp 00:11:56.210 adrfam: ipv4 00:11:56.210 subtype: nvme subsystem 00:11:56.210 treq: not required 00:11:56.210 portid: 0 00:11:56.210 trsvcid: 4420 00:11:56.210 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:56.210 traddr: 10.0.0.2 00:11:56.210 eflags: none 00:11:56.210 sectype: none 00:11:56.210 =====Discovery Log Entry 4====== 00:11:56.210 trtype: tcp 00:11:56.210 adrfam: ipv4 00:11:56.210 subtype: nvme subsystem 00:11:56.210 treq: not required 00:11:56.210 portid: 0 00:11:56.210 trsvcid: 4420 00:11:56.210 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:56.210 traddr: 10.0.0.2 00:11:56.210 eflags: none 00:11:56.210 sectype: none 00:11:56.210 =====Discovery Log Entry 5====== 00:11:56.210 trtype: tcp 00:11:56.210 adrfam: ipv4 00:11:56.210 subtype: discovery subsystem referral 00:11:56.210 treq: not required 00:11:56.210 portid: 0 00:11:56.210 trsvcid: 4430 00:11:56.210 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:56.210 traddr: 10.0.0.2 00:11:56.210 eflags: none 00:11:56.210 sectype: none 00:11:56.210 Perform nvmf subsystem discovery via RPC 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.210 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.210 [ 00:11:56.210 { 00:11:56.210 "allow_any_host": true, 00:11:56.210 "hosts": [], 00:11:56.210 "listen_addresses": [ 00:11:56.210 { 00:11:56.210 "adrfam": "IPv4", 00:11:56.210 "traddr": "10.0.0.2", 00:11:56.210 "trsvcid": "4420", 00:11:56.211 "trtype": "TCP" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:56.211 "subtype": "Discovery" 00:11:56.211 }, 00:11:56.211 { 00:11:56.211 "allow_any_host": true, 00:11:56.211 "hosts": [], 00:11:56.211 "listen_addresses": [ 00:11:56.211 { 00:11:56.211 "adrfam": "IPv4", 00:11:56.211 "traddr": "10.0.0.2", 00:11:56.211 "trsvcid": "4420", 00:11:56.211 "trtype": "TCP" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "max_cntlid": 65519, 00:11:56.211 "max_namespaces": 32, 00:11:56.211 "min_cntlid": 1, 00:11:56.211 "model_number": "SPDK bdev Controller", 00:11:56.211 "namespaces": [ 00:11:56.211 { 00:11:56.211 "bdev_name": "Null1", 00:11:56.211 "name": "Null1", 00:11:56.211 "nguid": 
"203387A5F1504A9BBAE6311D32061239", 00:11:56.211 "nsid": 1, 00:11:56.211 "uuid": "203387a5-f150-4a9b-bae6-311d32061239" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.211 "serial_number": "SPDK00000000000001", 00:11:56.211 "subtype": "NVMe" 00:11:56.211 }, 00:11:56.211 { 00:11:56.211 "allow_any_host": true, 00:11:56.211 "hosts": [], 00:11:56.211 "listen_addresses": [ 00:11:56.211 { 00:11:56.211 "adrfam": "IPv4", 00:11:56.211 "traddr": "10.0.0.2", 00:11:56.211 "trsvcid": "4420", 00:11:56.211 "trtype": "TCP" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "max_cntlid": 65519, 00:11:56.211 "max_namespaces": 32, 00:11:56.211 "min_cntlid": 1, 00:11:56.211 "model_number": "SPDK bdev Controller", 00:11:56.211 "namespaces": [ 00:11:56.211 { 00:11:56.211 "bdev_name": "Null2", 00:11:56.211 "name": "Null2", 00:11:56.211 "nguid": "D1FECFD20E8F4B87BEF438BF8BA7120F", 00:11:56.211 "nsid": 1, 00:11:56.211 "uuid": "d1fecfd2-0e8f-4b87-bef4-38bf8ba7120f" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:56.211 "serial_number": "SPDK00000000000002", 00:11:56.211 "subtype": "NVMe" 00:11:56.211 }, 00:11:56.211 { 00:11:56.211 "allow_any_host": true, 00:11:56.211 "hosts": [], 00:11:56.211 "listen_addresses": [ 00:11:56.211 { 00:11:56.211 "adrfam": "IPv4", 00:11:56.211 "traddr": "10.0.0.2", 00:11:56.211 "trsvcid": "4420", 00:11:56.211 "trtype": "TCP" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "max_cntlid": 65519, 00:11:56.211 "max_namespaces": 32, 00:11:56.211 "min_cntlid": 1, 00:11:56.211 "model_number": "SPDK bdev Controller", 00:11:56.211 "namespaces": [ 00:11:56.211 { 00:11:56.211 "bdev_name": "Null3", 00:11:56.211 "name": "Null3", 00:11:56.211 "nguid": "FB76A47D549F46128E669780373372B8", 00:11:56.211 "nsid": 1, 00:11:56.211 "uuid": "fb76a47d-549f-4612-8e66-9780373372b8" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:56.211 "serial_number": "SPDK00000000000003", 00:11:56.211 "subtype": "NVMe" 00:11:56.211 }, 00:11:56.211 { 00:11:56.211 "allow_any_host": true, 00:11:56.211 "hosts": [], 00:11:56.211 "listen_addresses": [ 00:11:56.211 { 00:11:56.211 "adrfam": "IPv4", 00:11:56.211 "traddr": "10.0.0.2", 00:11:56.211 "trsvcid": "4420", 00:11:56.211 "trtype": "TCP" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "max_cntlid": 65519, 00:11:56.211 "max_namespaces": 32, 00:11:56.211 "min_cntlid": 1, 00:11:56.211 "model_number": "SPDK bdev Controller", 00:11:56.211 "namespaces": [ 00:11:56.211 { 00:11:56.211 "bdev_name": "Null4", 00:11:56.211 "name": "Null4", 00:11:56.211 "nguid": "FAE34140380641398B213B277E0AB7FF", 00:11:56.211 "nsid": 1, 00:11:56.211 "uuid": "fae34140-3806-4139-8b21-3b277e0ab7ff" 00:11:56.211 } 00:11:56.211 ], 00:11:56.211 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:56.211 "serial_number": "SPDK00000000000004", 00:11:56.211 "subtype": "NVMe" 00:11:56.211 } 00:11:56.211 ] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 
07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.211 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.470 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.470 rmmod nvme_tcp 00:11:56.470 rmmod nvme_fabrics 00:11:56.470 rmmod nvme_keyring 00:11:56.470 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.470 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:56.470 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:56.470 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 74651 ']' 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 74651 
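[editorial note] The discovery-test teardown traced above reduces to a short RPC sequence: delete each subsystem, free the null bdev that backed it, drop the discovery referral, then confirm no bdevs remain. A minimal sketch of that sequence, assuming the stock scripts/rpc.py client against the default /var/tmp/spdk.sock (the test's rpc_cmd helper is a wrapper around the same RPCs):
    for i in $(seq 1 4); do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove the subsystem before its bdev
        scripts/rpc.py bdev_null_delete "Null${i}"                             # free the null bdev that served as nsid 1
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430   # clear the referral added during setup
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                           # empty output means the cleanup succeeded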
00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 74651 ']' 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 74651 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74651 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:56.471 killing process with pid 74651 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74651' 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 74651 00:11:56.471 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 74651 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:56.730 00:11:56.730 real 0m2.357s 00:11:56.730 user 0m6.364s 00:11:56.730 sys 0m0.640s 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.730 ************************************ 00:11:56.730 END TEST nvmf_target_discovery 00:11:56.730 ************************************ 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.730 ************************************ 00:11:56.730 START TEST nvmf_referrals 00:11:56.730 
************************************ 00:11:56.730 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:56.989 * Looking for test storage... 00:11:56.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.989 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.990 07:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.990 07:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:56.990 Cannot find device "nvmf_tgt_br" 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.990 Cannot find device "nvmf_tgt_br2" 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:56.990 Cannot find device "nvmf_tgt_br" 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:56.990 Cannot find device "nvmf_tgt_br2" 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:56.990 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
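[editorial note] The nvmf_veth_init steps traced here, and continued just below through addressing, bridging and the ping checks, build a small virtual test network: a target namespace holding the target-side veth ends, an initiator interface left on the host, and a bridge joining the peer ends. Condensed into plain iproute2/iptables commands (the ip link set ... up housekeeping is elided), roughly:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br                 # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br                  # first target end, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2                 # second target end
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let the NVMe/TCP data port in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                       # forward traffic across the bridge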
00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.249 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:57.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:11:57.249 00:11:57.249 --- 10.0.0.2 ping statistics --- 00:11:57.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.250 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:57.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:11:57.250 00:11:57.250 --- 10.0.0.3 ping statistics --- 00:11:57.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.250 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:11:57.250 00:11:57.250 --- 10.0.0.1 ping statistics --- 00:11:57.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.250 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=74873 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 74873 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 74873 ']' 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.250 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.509 [2024-07-25 07:26:30.017928] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
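[editorial note] With the test network verified by the pings above and nvmf_tgt about to start inside the namespace (traced next), the referrals exercise that follows boils down to an add/verify/remove round-trip against the discovery service on 10.0.0.2:8009. A condensed sketch using the same RPCs and nvme discover invocation the trace records below; scripts/rpc.py is assumed as the RPC client, and the --hostnqn/--hostid flags the test passes to nvme discover are elided here:
    # target side: create the TCP transport, expose a discovery listener, publish three referrals
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length        # expect 3

    # initiator side: the referrals must appear in the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # removing the referrals empties the list again
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length        # expect 0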
00:11:57.509 [2024-07-25 07:26:30.017989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.509 [2024-07-25 07:26:30.160063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.768 [2024-07-25 07:26:30.252707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.768 [2024-07-25 07:26:30.252755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.768 [2024-07-25 07:26:30.252762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.768 [2024-07-25 07:26:30.252766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.768 [2024-07-25 07:26:30.252770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.768 [2024-07-25 07:26:30.252984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.768 [2024-07-25 07:26:30.253074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.768 [2024-07-25 07:26:30.253169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.768 [2024-07-25 07:26:30.253174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 [2024-07-25 07:26:30.946764] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 [2024-07-25 07:26:30.985548] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.336 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.336 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.596 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.856 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.115 07:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:59.115 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:59.374 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:59.374 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 
--hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.375 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.375 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.636 
07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.636 rmmod nvme_tcp 00:11:59.636 rmmod nvme_fabrics 00:11:59.636 rmmod nvme_keyring 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 74873 ']' 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 74873 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 74873 ']' 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 74873 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74873 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:59.636 killing process with pid 74873 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74873' 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 74873 00:11:59.636 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 74873 00:11:59.908 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:59.908 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:59.909 00:11:59.909 real 0m3.164s 00:11:59.909 user 0m9.833s 00:11:59.909 sys 0m1.001s 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.909 ************************************ 00:11:59.909 END TEST nvmf_referrals 00:11:59.909 ************************************ 00:11:59.909 07:26:32 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.909 ************************************ 00:11:59.909 START TEST nvmf_connect_disconnect 00:11:59.909 ************************************ 00:11:59.909 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:00.169 * Looking for test storage... 00:12:00.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.169 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:00.170 07:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:00.170 Cannot find device "nvmf_tgt_br" 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:00.170 Cannot find device "nvmf_tgt_br2" 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:00.170 Cannot find device "nvmf_tgt_br" 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:00.170 Cannot find device "nvmf_tgt_br2" 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:00.170 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:00.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:00.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:00.430 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:00.430 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:00.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:12:00.431 00:12:00.431 --- 10.0.0.2 ping statistics --- 00:12:00.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.431 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:00.431 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:00.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:00.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:00.691 00:12:00.691 --- 10.0.0.3 ping statistics --- 00:12:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.691 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:00.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:00.691 00:12:00.691 --- 10.0.0.1 ping statistics --- 00:12:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.691 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=75176 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 75176 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 75176 ']' 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.691 07:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.691 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.691 [2024-07-25 07:26:33.273605] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:00.691 [2024-07-25 07:26:33.273671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.691 [2024-07-25 07:26:33.414050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.949 [2024-07-25 07:26:33.506416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.949 [2024-07-25 07:26:33.506461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.949 [2024-07-25 07:26:33.506467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.949 [2024-07-25 07:26:33.506472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.949 [2024-07-25 07:26:33.506476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
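For orientation, the nvmf_veth_init sequence above wires up the test network this target listens on. A condensed sketch of what it builds, using only the interface names, addresses, and commands shown in the log (the individual 'ip link set ... up' steps are omitted):

    ip netns add nvmf_tgt_ns_spdk                                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br                  # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                            # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
    ip link add nvmf_br type bridge                                            # bridge joins the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP on port 4420
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                        # let the bridge forward
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm bridge connectivity before the target application starts.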
00:12:00.949 [2024-07-25 07:26:33.506679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.949 [2024-07-25 07:26:33.506867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.949 [2024-07-25 07:26:33.507718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.949 [2024-07-25 07:26:33.507720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.516 [2024-07-25 07:26:34.195225] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.516 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.775 07:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.775 [2024-07-25 07:26:34.263702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:01.775 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:04.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.228 rmmod nvme_tcp 00:12:13.228 rmmod nvme_fabrics 00:12:13.228 rmmod nvme_keyring 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 75176 ']' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 75176 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 75176 ']' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 75176 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 
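For reference, the target provisioning that produced the five 'disconnected 1 controller(s)' lines above maps onto the RPC sequence below. rpc_cmd forwards its arguments to the SPDK RPC client, so the sketch uses scripts/rpc.py with the exact arguments from the log; the per-iteration connect/disconnect commands are not shown in the log and are only inferred from the NVME_CONNECT/NVME_HOST settings earlier, so treat that part as an assumption:

    # provision the target (runs against the nvmf_tgt inside nvmf_tgt_ns_spdk)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB bdev with 512-byte blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # num_iterations=5: each pass presumably connects and then disconnects the initiator, roughly:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                # emits 'NQN:... disconnected 1 controller(s)'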
00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75176 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.228 killing process with pid 75176 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75176' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 75176 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 75176 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:13.228 00:12:13.228 real 0m13.247s 00:12:13.228 user 0m48.677s 00:12:13.228 sys 0m1.453s 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.228 ************************************ 00:12:13.228 END TEST nvmf_connect_disconnect 00:12:13.228 ************************************ 00:12:13.228 07:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.229 07:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:13.229 07:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.229 07:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.229 ************************************ 00:12:13.229 START TEST nvmf_multitarget 00:12:13.229 ************************************ 00:12:13.229 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.489 * Looking for test storage... 
00:12:13.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.489 07:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp 
']' 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:13.489 Cannot find device "nvmf_tgt_br" 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:12:13.489 Cannot find device "nvmf_tgt_br2" 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:13.489 Cannot find device "nvmf_tgt_br" 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:13.489 Cannot find device "nvmf_tgt_br2" 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:13.489 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:13.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:13.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:13.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:12:13.749 00:12:13.749 --- 10.0.0.2 ping statistics --- 00:12:13.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.749 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:13.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:13.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:13.749 00:12:13.749 --- 10.0.0.3 ping statistics --- 00:12:13.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.749 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:13.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:13.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:13.749 00:12:13.749 --- 10.0.0.1 ping statistics --- 00:12:13.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.749 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.749 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=75575 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 75575 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 75575 ']' 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.010 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:14.010 [2024-07-25 07:26:46.532350] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:12:14.010 [2024-07-25 07:26:46.532431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.010 [2024-07-25 07:26:46.671213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.270 [2024-07-25 07:26:46.772253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.270 [2024-07-25 07:26:46.772319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.270 [2024-07-25 07:26:46.772326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.270 [2024-07-25 07:26:46.772331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.270 [2024-07-25 07:26:46.772334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.270 [2024-07-25 07:26:46.772465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.270 [2024-07-25 07:26:46.772679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.270 [2024-07-25 07:26:46.772814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.270 [2024-07-25 07:26:46.772819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:14.839 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:15.098 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:15.098 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:15.098 "nvmf_tgt_1" 00:12:15.098 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:15.358 "nvmf_tgt_2" 00:12:15.358 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.358 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:12:15.358 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:15.358 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:15.358 true 00:12:15.358 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:15.616 true 00:12:15.616 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.616 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:15.616 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:15.616 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:15.616 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:15.617 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.617 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:15.617 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.617 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:15.617 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.617 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.877 rmmod nvme_tcp 00:12:15.877 rmmod nvme_fabrics 00:12:15.877 rmmod nvme_keyring 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 75575 ']' 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 75575 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 75575 ']' 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 75575 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75575 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.877 killing process with pid 75575 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
75575' 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 75575 00:12:15.877 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 75575 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:16.138 00:12:16.138 real 0m2.752s 00:12:16.138 user 0m8.398s 00:12:16.138 sys 0m0.783s 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:16.138 ************************************ 00:12:16.138 END TEST nvmf_multitarget 00:12:16.138 ************************************ 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.138 ************************************ 00:12:16.138 START TEST nvmf_rpc 00:12:16.138 ************************************ 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:16.138 * Looking for test storage... 
00:12:16.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.138 07:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.138 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:16.404 Cannot find device "nvmf_tgt_br" 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.404 Cannot find device "nvmf_tgt_br2" 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:16.404 Cannot find device "nvmf_tgt_br" 00:12:16.404 07:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:16.404 Cannot find device "nvmf_tgt_br2" 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.404 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:16.404 
07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.404 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:16.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:16.664 00:12:16.664 --- 10.0.0.2 ping statistics --- 00:12:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.664 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:16.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:16.664 00:12:16.664 --- 10.0.0.3 ping statistics --- 00:12:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.664 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:16.664 00:12:16.664 --- 10.0.0.1 ping statistics --- 00:12:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.664 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=75798 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 75798 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 75798 ']' 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.664 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.664 [2024-07-25 07:26:49.258780] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:16.664 [2024-07-25 07:26:49.258849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.923 [2024-07-25 07:26:49.398828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.923 [2024-07-25 07:26:49.498230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.923 [2024-07-25 07:26:49.498280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.923 [2024-07-25 07:26:49.498287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.923 [2024-07-25 07:26:49.498292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.923 [2024-07-25 07:26:49.498297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:16.923 [2024-07-25 07:26:49.499263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.923 [2024-07-25 07:26:49.499334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.923 [2024-07-25 07:26:49.499434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.923 [2024-07-25 07:26:49.499437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:17.491 "poll_groups": [ 00:12:17.491 { 00:12:17.491 "admin_qpairs": 0, 00:12:17.491 "completed_nvme_io": 0, 00:12:17.491 "current_admin_qpairs": 0, 00:12:17.491 "current_io_qpairs": 0, 00:12:17.491 "io_qpairs": 0, 00:12:17.491 "name": "nvmf_tgt_poll_group_000", 00:12:17.491 "pending_bdev_io": 0, 00:12:17.491 "transports": [] 00:12:17.491 }, 00:12:17.491 { 00:12:17.491 "admin_qpairs": 0, 00:12:17.491 "completed_nvme_io": 0, 00:12:17.491 "current_admin_qpairs": 0, 00:12:17.491 "current_io_qpairs": 0, 00:12:17.491 "io_qpairs": 0, 00:12:17.491 "name": "nvmf_tgt_poll_group_001", 00:12:17.491 "pending_bdev_io": 0, 00:12:17.491 "transports": [] 00:12:17.491 }, 00:12:17.491 { 00:12:17.491 "admin_qpairs": 0, 00:12:17.491 "completed_nvme_io": 0, 00:12:17.491 "current_admin_qpairs": 0, 00:12:17.491 "current_io_qpairs": 0, 00:12:17.491 "io_qpairs": 0, 00:12:17.491 "name": "nvmf_tgt_poll_group_002", 00:12:17.491 "pending_bdev_io": 0, 00:12:17.491 "transports": [] 00:12:17.491 }, 00:12:17.491 { 00:12:17.491 "admin_qpairs": 0, 00:12:17.491 "completed_nvme_io": 0, 00:12:17.491 "current_admin_qpairs": 0, 00:12:17.491 "current_io_qpairs": 0, 00:12:17.491 "io_qpairs": 0, 00:12:17.491 "name": "nvmf_tgt_poll_group_003", 00:12:17.491 "pending_bdev_io": 0, 00:12:17.491 "transports": [] 00:12:17.491 } 00:12:17.491 ], 00:12:17.491 "tick_rate": 2290000000 00:12:17.491 }' 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:17.491 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.751 [2024-07-25 07:26:50.307327] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:17.751 "poll_groups": [ 00:12:17.751 { 00:12:17.751 "admin_qpairs": 0, 00:12:17.751 "completed_nvme_io": 0, 00:12:17.751 "current_admin_qpairs": 0, 00:12:17.751 "current_io_qpairs": 0, 00:12:17.751 "io_qpairs": 0, 00:12:17.751 "name": "nvmf_tgt_poll_group_000", 00:12:17.751 "pending_bdev_io": 0, 00:12:17.751 "transports": [ 00:12:17.751 { 00:12:17.751 "trtype": "TCP" 00:12:17.751 } 00:12:17.751 ] 00:12:17.751 }, 00:12:17.751 { 00:12:17.751 "admin_qpairs": 0, 00:12:17.751 "completed_nvme_io": 0, 00:12:17.751 "current_admin_qpairs": 0, 00:12:17.751 "current_io_qpairs": 0, 00:12:17.751 "io_qpairs": 0, 00:12:17.751 "name": "nvmf_tgt_poll_group_001", 00:12:17.751 "pending_bdev_io": 0, 00:12:17.751 "transports": [ 00:12:17.751 { 00:12:17.751 "trtype": "TCP" 00:12:17.751 } 00:12:17.751 ] 00:12:17.751 }, 00:12:17.751 { 00:12:17.751 "admin_qpairs": 0, 00:12:17.751 "completed_nvme_io": 0, 00:12:17.751 "current_admin_qpairs": 0, 00:12:17.751 "current_io_qpairs": 0, 00:12:17.751 "io_qpairs": 0, 00:12:17.751 "name": "nvmf_tgt_poll_group_002", 00:12:17.751 "pending_bdev_io": 0, 00:12:17.751 "transports": [ 00:12:17.751 { 00:12:17.751 "trtype": "TCP" 00:12:17.751 } 00:12:17.751 ] 00:12:17.751 }, 00:12:17.751 { 00:12:17.751 "admin_qpairs": 0, 00:12:17.751 "completed_nvme_io": 0, 00:12:17.751 "current_admin_qpairs": 0, 00:12:17.751 "current_io_qpairs": 0, 00:12:17.751 "io_qpairs": 0, 00:12:17.751 "name": "nvmf_tgt_poll_group_003", 00:12:17.751 "pending_bdev_io": 0, 00:12:17.751 "transports": [ 00:12:17.751 { 00:12:17.751 "trtype": "TCP" 00:12:17.751 } 00:12:17.751 ] 00:12:17.751 } 00:12:17.751 ], 00:12:17.751 "tick_rate": 2290000000 00:12:17.751 }' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:17.751 07:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.751 Malloc1 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.751 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 [2024-07-25 07:26:50.522318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -a 10.0.0.2 -s 4420 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -a 10.0.0.2 -s 4420 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -a 10.0.0.2 -s 4420 00:12:18.010 [2024-07-25 07:26:50.558530] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b' 00:12:18.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.010 could not add new controller: failed to write to nvme-fabrics device 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.010 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 
--hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.270 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.270 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:18.270 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.270 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:18.270 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.174 [2024-07-25 07:26:52.885072] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b' 00:12:20.174 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.174 could not add new controller: failed to write to nvme-fabrics device 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.174 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.433 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.433 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:20.433 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.433 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:20.433 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
00:12:22.364 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:22.364 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:22.364 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.623 [2024-07-25 07:26:55.199720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.623 
07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.623 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.882 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.882 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:22.882 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.882 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:22.882 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.817 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 [2024-07-25 07:26:57.550910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:25.074 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:27.601 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 [2024-07-25 07:26:59.890275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.602 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.602 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.602 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:27.602 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.602 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:27.602 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.522 07:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.522 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.523 [2024-07-25 07:27:02.241306] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.523 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:29.781 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.313 [2024-07-25 07:27:04.564771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:32.313 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:34.224 07:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.224 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.224 07:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.224 [2024-07-25 07:27:06.911572] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.225 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 [2024-07-25 07:27:06.983519] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.489 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 [2024-07-25 07:27:07.047453] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 [2024-07-25 07:27:07.119399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 [2024-07-25 07:27:07.191317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.490 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.749 07:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:34.749 "poll_groups": [ 00:12:34.749 { 00:12:34.749 "admin_qpairs": 2, 00:12:34.749 "completed_nvme_io": 67, 00:12:34.749 "current_admin_qpairs": 0, 00:12:34.749 "current_io_qpairs": 0, 00:12:34.749 "io_qpairs": 16, 00:12:34.749 "name": "nvmf_tgt_poll_group_000", 00:12:34.749 "pending_bdev_io": 0, 00:12:34.749 "transports": [ 00:12:34.749 { 00:12:34.749 "trtype": "TCP" 00:12:34.749 } 00:12:34.749 ] 00:12:34.749 }, 00:12:34.749 { 00:12:34.749 "admin_qpairs": 3, 00:12:34.749 "completed_nvme_io": 115, 00:12:34.749 "current_admin_qpairs": 0, 00:12:34.749 "current_io_qpairs": 0, 00:12:34.749 "io_qpairs": 17, 00:12:34.749 "name": "nvmf_tgt_poll_group_001", 00:12:34.749 "pending_bdev_io": 0, 00:12:34.749 "transports": [ 00:12:34.749 { 00:12:34.749 "trtype": "TCP" 00:12:34.749 } 00:12:34.749 ] 00:12:34.749 }, 00:12:34.749 { 00:12:34.749 "admin_qpairs": 1, 00:12:34.749 "completed_nvme_io": 169, 00:12:34.749 "current_admin_qpairs": 0, 00:12:34.749 "current_io_qpairs": 0, 00:12:34.749 "io_qpairs": 19, 00:12:34.749 "name": "nvmf_tgt_poll_group_002", 00:12:34.749 "pending_bdev_io": 0, 00:12:34.749 "transports": [ 00:12:34.749 { 00:12:34.749 "trtype": "TCP" 00:12:34.749 } 00:12:34.749 ] 00:12:34.749 }, 00:12:34.749 { 00:12:34.749 "admin_qpairs": 1, 00:12:34.749 "completed_nvme_io": 69, 00:12:34.749 "current_admin_qpairs": 0, 00:12:34.749 "current_io_qpairs": 0, 00:12:34.749 "io_qpairs": 18, 00:12:34.749 "name": "nvmf_tgt_poll_group_003", 00:12:34.749 "pending_bdev_io": 0, 00:12:34.749 "transports": [ 00:12:34.749 { 00:12:34.749 "trtype": "TCP" 00:12:34.749 } 00:12:34.749 ] 00:12:34.749 } 00:12:34.749 ], 00:12:34.749 "tick_rate": 2290000000 00:12:34.749 }' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.749 rmmod nvme_tcp 00:12:34.749 rmmod nvme_fabrics 00:12:34.749 rmmod nvme_keyring 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 75798 ']' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 75798 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 75798 ']' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 75798 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75798 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75798' 00:12:34.749 killing process with pid 75798 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 75798 00:12:34.749 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 75798 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.008 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:35.268 00:12:35.268 real 0m19.033s 00:12:35.268 user 1m12.561s 00:12:35.268 sys 0m2.029s 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.269 ************************************ 00:12:35.269 END TEST nvmf_rpc 00:12:35.269 ************************************ 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.269 ************************************ 00:12:35.269 START TEST nvmf_invalid 00:12:35.269 ************************************ 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:35.269 * Looking for test storage... 00:12:35.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt 
== phy ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:35.269 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:35.528 Cannot find device "nvmf_tgt_br" 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.528 Cannot find device "nvmf_tgt_br2" 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:35.528 Cannot find device "nvmf_tgt_br" 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:35.528 Cannot find device "nvmf_tgt_br2" 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.528 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:12:35.528 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.529 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
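The nvmf_veth_init trace above builds the virtual test network before any NVMe/TCP traffic flows: an initiator interface on the host at 10.0.0.1, two target interfaces (10.0.0.2 and 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with TCP port 4420 opened for the NVMe-oF listener. A condensed sketch of that topology, using only the interface names and addresses shown in the trace (an illustration, not the actual nvmf/common.sh code):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" master nvmf_br
        ip link set "$br" up
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow exercise this path in both directions (host to 10.0.0.2/10.0.0.3, namespace back to 10.0.0.1) before the target application is started.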
00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:35.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:35.812 00:12:35.812 --- 10.0.0.2 ping statistics --- 00:12:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.812 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:35.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:12:35.812 00:12:35.812 --- 10.0.0.3 ping statistics --- 00:12:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.812 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:12:35.812 00:12:35.812 --- 10.0.0.1 ping statistics --- 00:12:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.812 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=76317 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 76317 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 76317 ']' 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
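The trace above then launches the target application inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, reported as nvmfpid=76317) and blocks in waitforlisten until its JSON-RPC socket answers. Roughly, that wait amounts to polling the socket, as in this sketch (an illustration only, not the real common/autotest_common.sh helper; it assumes the default /var/tmp/spdk.sock socket and the rpc.py path shown in the trace):

    pid=76317                       # nvmfpid reported above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # bail out if the target died before it could start listening
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        # any successful RPC (rpc_get_methods is the cheapest) means the socket is ready
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the test moves on to the invalid-parameter nvmf_create_subsystem calls below.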
00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.812 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:35.812 [2024-07-25 07:27:08.500091] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:35.812 [2024-07-25 07:27:08.500272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.085 [2024-07-25 07:27:08.644351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.085 [2024-07-25 07:27:08.744084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.085 [2024-07-25 07:27:08.744240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.085 [2024-07-25 07:27:08.744280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.085 [2024-07-25 07:27:08.744309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.085 [2024-07-25 07:27:08.744326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.085 [2024-07-25 07:27:08.744470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.085 [2024-07-25 07:27:08.744622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.085 [2024-07-25 07:27:08.745240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.085 [2024-07-25 07:27:08.745241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.022 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25314 00:12:37.023 [2024-07-25 07:27:09.675433] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:37.023 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/25 07:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode25314 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:37.023 request: 00:12:37.023 { 00:12:37.023 "method": "nvmf_create_subsystem", 00:12:37.023 "params": { 00:12:37.023 "nqn": "nqn.2016-06.io.spdk:cnode25314", 00:12:37.023 "tgt_name": "foobar" 00:12:37.023 } 00:12:37.023 } 00:12:37.023 Got JSON-RPC error response 00:12:37.023 GoRPCClient: error on JSON-RPC call' 00:12:37.023 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/25 07:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25314 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:37.023 request: 00:12:37.023 { 00:12:37.023 "method": "nvmf_create_subsystem", 00:12:37.023 "params": { 00:12:37.023 "nqn": "nqn.2016-06.io.spdk:cnode25314", 00:12:37.023 "tgt_name": "foobar" 00:12:37.023 } 00:12:37.023 } 00:12:37.023 Got JSON-RPC error response 00:12:37.023 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:37.023 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:37.023 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23993 00:12:37.282 [2024-07-25 07:27:09.911253] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23993: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:37.282 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/25 07:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23993 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:37.282 request: 00:12:37.282 { 00:12:37.282 "method": "nvmf_create_subsystem", 00:12:37.282 "params": { 00:12:37.282 "nqn": "nqn.2016-06.io.spdk:cnode23993", 00:12:37.282 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:37.282 } 00:12:37.282 } 00:12:37.282 Got JSON-RPC error response 00:12:37.282 GoRPCClient: error on JSON-RPC call' 00:12:37.282 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/25 07:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23993 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:37.282 request: 00:12:37.282 { 00:12:37.282 "method": "nvmf_create_subsystem", 00:12:37.282 "params": { 00:12:37.282 "nqn": "nqn.2016-06.io.spdk:cnode23993", 00:12:37.282 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:37.282 } 00:12:37.282 } 00:12:37.282 Got JSON-RPC error response 00:12:37.282 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:37.282 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:37.282 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10519 00:12:37.542 [2024-07-25 07:27:10.143165] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode10519: invalid model number 'SPDK_Controller' 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/25 07:27:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10519], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:37.542 request: 00:12:37.542 { 00:12:37.542 "method": "nvmf_create_subsystem", 00:12:37.542 "params": { 00:12:37.542 "nqn": "nqn.2016-06.io.spdk:cnode10519", 00:12:37.542 "model_number": "SPDK_Controller\u001f" 00:12:37.542 } 00:12:37.542 } 00:12:37.542 Got JSON-RPC error response 00:12:37.542 GoRPCClient: error on JSON-RPC call' 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/25 07:27:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10519], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:37.542 request: 00:12:37.542 { 00:12:37.542 "method": "nvmf_create_subsystem", 00:12:37.542 "params": { 00:12:37.542 "nqn": "nqn.2016-06.io.spdk:cnode10519", 00:12:37.542 "model_number": "SPDK_Controller\u001f" 00:12:37.542 } 00:12:37.542 } 00:12:37.542 Got JSON-RPC error response 00:12:37.542 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:37.542 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 
00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x77' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:37.543 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 52 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:12:37.803 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wIx/hDl~w~\9j\A4# /dev/null' 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:40.914 00:12:40.914 real 0m5.642s 00:12:40.914 user 0m21.545s 00:12:40.914 sys 0m1.475s 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:40.914 ************************************ 00:12:40.914 END TEST nvmf_invalid 00:12:40.914 ************************************ 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.914 ************************************ 00:12:40.914 START TEST nvmf_connect_stress 00:12:40.914 ************************************ 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:40.914 * Looking for test storage... 
00:12:40.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:40.914 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:41.173 Cannot find device "nvmf_tgt_br" 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.173 Cannot find device "nvmf_tgt_br2" 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:41.173 Cannot find device "nvmf_tgt_br" 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:41.173 Cannot find device "nvmf_tgt_br2" 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.173 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.174 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.174 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:41.433 
07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.433 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:41.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:12:41.433 00:12:41.433 --- 10.0.0.2 ping statistics --- 00:12:41.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.433 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:41.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:41.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:12:41.433 00:12:41.433 --- 10.0.0.3 ping statistics --- 00:12:41.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.433 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:41.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:12:41.433 00:12:41.433 --- 10.0.0.1 ping statistics --- 00:12:41.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.433 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=76816 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 76816 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 76816 ']' 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.433 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.702 [2024-07-25 07:27:14.180699] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:12:41.702 [2024-07-25 07:27:14.180776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.702 [2024-07-25 07:27:14.317307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:41.702 [2024-07-25 07:27:14.423510] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.702 [2024-07-25 07:27:14.423910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.702 [2024-07-25 07:27:14.424032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.702 [2024-07-25 07:27:14.424134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.702 [2024-07-25 07:27:14.424225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.702 [2024-07-25 07:27:14.424418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.702 [2024-07-25 07:27:14.424529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.702 [2024-07-25 07:27:14.424528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.637 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.637 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.638 [2024-07-25 07:27:15.141377] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.638 07:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.638 [2024-07-25 07:27:15.158360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.638 NULL1 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=76869 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.638 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.897 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:12:42.897 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:42.897 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.897 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.897 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.465 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.465 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:43.465 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.465 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.465 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.726 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.726 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:43.726 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.726 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.726 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.991 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.991 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:43.991 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.991 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.991 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.253 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.253 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:44.253 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.253 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.253 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.822 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.822 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:44.822 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.822 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.822 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.082 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.082 
07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:45.082 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.082 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.082 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.342 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.342 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:45.342 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.342 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.342 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.602 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.602 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:45.602 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.602 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.602 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.861 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.861 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:45.861 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.861 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.861 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.430 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.430 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:46.430 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.430 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.430 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.689 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.689 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:46.689 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.689 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.689 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.948 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.948 07:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:46.948 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.948 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.948 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.208 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.208 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:47.208 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.208 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.208 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.467 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.467 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:47.467 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.467 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.467 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.036 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.036 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:48.036 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.036 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.036 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.294 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.294 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:48.294 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.294 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.294 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.553 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:48.553 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.553 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.553 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.812 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.812 07:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:48.812 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.812 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.812 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.072 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.072 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:49.072 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.072 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.072 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.640 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.640 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:49.640 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.640 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.640 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.899 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.899 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:49.899 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.899 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.899 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.157 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.157 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:50.157 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.157 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.157 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.414 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.414 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:50.414 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.414 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.414 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.978 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.978 07:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:50.978 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.978 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.978 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.236 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.236 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:51.236 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.236 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.236 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.494 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.494 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:51.494 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.494 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.494 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.764 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.764 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:51.764 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.764 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.764 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.037 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.037 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:52.037 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.037 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.037 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.603 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.603 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:52.603 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.603 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.603 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.864 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76869 00:12:52.864 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (76869) - No such process 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 76869 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.864 rmmod nvme_tcp 00:12:52.864 rmmod nvme_fabrics 00:12:52.864 rmmod nvme_keyring 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 76816 ']' 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 76816 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 76816 ']' 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 76816 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76816 00:12:52.864 killing process with pid 76816 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76816' 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 76816 00:12:52.864 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 76816 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
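
The long run of kill -0 76869 / rpc_cmd records above is the connect_stress harness polling whether the stress process (PID 76869) is still alive; once the probe reports "No such process" the script falls through to wait, removes rpc.txt, and tears the target down. A minimal sketch of that polling pattern, with an illustrative PID variable and sleep interval rather than the literal connect_stress.sh body:

# Poll until the stress process exits. kill -0 sends no signal;
# it only reports whether the PID still exists.
stress_pid=76869          # illustrative; the real script records the PID after launching the stressor
while kill -0 "$stress_pid" 2>/dev/null; do
    # the harness also issues an RPC against the target on each iteration; here we simply pause
    sleep 0.5
done
wait "$stress_pid" 2>/dev/null
echo "stress process $stress_pid has exited; proceeding to cleanup"
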
00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:53.123 00:12:53.123 real 0m12.256s 00:12:53.123 user 0m41.049s 00:12:53.123 sys 0m2.991s 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.123 ************************************ 00:12:53.123 END TEST nvmf_connect_stress 00:12:53.123 ************************************ 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.123 ************************************ 00:12:53.123 START TEST nvmf_fused_ordering 00:12:53.123 ************************************ 00:12:53.123 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:53.383 * Looking for test storage... 
00:12:53.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.383 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.384 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:53.384 Cannot find device "nvmf_tgt_br" 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.384 Cannot find device "nvmf_tgt_br2" 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:53.384 Cannot find device "nvmf_tgt_br" 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:53.384 Cannot find device "nvmf_tgt_br2" 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:12:53.384 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:53.644 
07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:53.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:12:53.644 00:12:53.644 --- 10.0.0.2 ping statistics --- 00:12:53.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.644 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:53.644 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:53.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:53.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:53.644 00:12:53.644 --- 10.0.0.3 ping statistics --- 00:12:53.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.645 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:53.645 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:53.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:53.905 00:12:53.905 --- 10.0.0.1 ping statistics --- 00:12:53.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.905 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=77200 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 77200 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 77200 ']' 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.905 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:53.905 [2024-07-25 07:27:26.465240] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
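
The nvmf_veth_init sequence above (common.sh@166 through @207) builds the virtual test network used for this run: one veth pair for the initiator, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side ends, an iptables rule admitting TCP port 4420, and ping checks of 10.0.0.2, 10.0.0.3 and 10.0.0.1. Condensed into a plain shell sketch with the same interface names and addresses as the log (run as root; error handling and cleanup omitted):

# Namespace and veth pairs (host side keeps the *_br ends, target side moves into the netns)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addresses: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side ends together
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Admit NVMe/TCP traffic and verify connectivity in both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
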
00:12:53.905 [2024-07-25 07:27:26.465319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.905 [2024-07-25 07:27:26.608201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.164 [2024-07-25 07:27:26.711575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.164 [2024-07-25 07:27:26.711619] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.164 [2024-07-25 07:27:26.711627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.164 [2024-07-25 07:27:26.711632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.164 [2024-07-25 07:27:26.711637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.164 [2024-07-25 07:27:26.711663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 [2024-07-25 07:27:27.393595] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 
[2024-07-25 07:27:27.409640] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 NULL1 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.732 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:54.993 [2024-07-25 07:27:27.470761] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
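
Before launching the initiator, the script configures the target over JSON-RPC (fused_ordering.sh@15 through @20): create the TCP transport with an 8192-byte IO unit size, create subsystem nqn.2016-06.io.spdk:cnode1 with at most 10 namespaces, add a 10.0.0.2:4420 TCP listener, create a 1000 MiB null bdev with 512-byte blocks, and attach it as a namespace. rpc_cmd in the log is the autotest wrapper around scripts/rpc.py; driving rpc.py directly, the same sequence would look roughly like the sketch below (socket path left at the /var/tmp/spdk.sock default, which is an assumption here, not something the log states):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Transport and subsystem, mirroring the rpc_cmd arguments recorded in the log
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Backing namespace: a null bdev, then wait for bdev examination to settle before exposing it
$rpc bdev_null_create NULL1 1000 512
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary is then run with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', which matches the listener created above.
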
00:12:54.993 [2024-07-25 07:27:27.470799] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77252 ] 00:12:55.254 Attached to nqn.2016-06.io.spdk:cnode1 00:12:55.254 Namespace ID: 1 size: 1GB 00:12:55.254 fused_ordering(0) 00:12:55.254 fused_ordering(1) 00:12:55.254 fused_ordering(2) 00:12:55.254 fused_ordering(3) 00:12:55.254 fused_ordering(4) 00:12:55.254 fused_ordering(5) 00:12:55.254 fused_ordering(6) 00:12:55.254 fused_ordering(7) 00:12:55.254 fused_ordering(8) 00:12:55.254 fused_ordering(9) 00:12:55.254 fused_ordering(10) 00:12:55.254 fused_ordering(11) 00:12:55.254 fused_ordering(12) 00:12:55.254 fused_ordering(13) 00:12:55.254 fused_ordering(14) 00:12:55.254 fused_ordering(15) 00:12:55.254 fused_ordering(16) 00:12:55.254 fused_ordering(17) 00:12:55.254 fused_ordering(18) 00:12:55.254 fused_ordering(19) 00:12:55.254 fused_ordering(20) 00:12:55.254 fused_ordering(21) 00:12:55.254 fused_ordering(22) 00:12:55.254 fused_ordering(23) 00:12:55.254 fused_ordering(24) 00:12:55.254 fused_ordering(25) 00:12:55.254 fused_ordering(26) 00:12:55.254 fused_ordering(27) 00:12:55.254 fused_ordering(28) 00:12:55.254 fused_ordering(29) 00:12:55.254 fused_ordering(30) 00:12:55.254 fused_ordering(31) 00:12:55.254 fused_ordering(32) 00:12:55.254 fused_ordering(33) 00:12:55.254 fused_ordering(34) 00:12:55.254 fused_ordering(35) 00:12:55.254 fused_ordering(36) 00:12:55.254 fused_ordering(37) 00:12:55.254 fused_ordering(38) 00:12:55.254 fused_ordering(39) 00:12:55.254 fused_ordering(40) 00:12:55.254 fused_ordering(41) 00:12:55.254 fused_ordering(42) 00:12:55.254 fused_ordering(43) 00:12:55.254 fused_ordering(44) 00:12:55.254 fused_ordering(45) 00:12:55.254 fused_ordering(46) 00:12:55.254 fused_ordering(47) 00:12:55.254 fused_ordering(48) 00:12:55.254 fused_ordering(49) 00:12:55.254 fused_ordering(50) 00:12:55.254 fused_ordering(51) 00:12:55.254 fused_ordering(52) 00:12:55.254 fused_ordering(53) 00:12:55.254 fused_ordering(54) 00:12:55.254 fused_ordering(55) 00:12:55.254 fused_ordering(56) 00:12:55.254 fused_ordering(57) 00:12:55.254 fused_ordering(58) 00:12:55.254 fused_ordering(59) 00:12:55.254 fused_ordering(60) 00:12:55.254 fused_ordering(61) 00:12:55.254 fused_ordering(62) 00:12:55.254 fused_ordering(63) 00:12:55.254 fused_ordering(64) 00:12:55.254 fused_ordering(65) 00:12:55.254 fused_ordering(66) 00:12:55.254 fused_ordering(67) 00:12:55.254 fused_ordering(68) 00:12:55.254 fused_ordering(69) 00:12:55.254 fused_ordering(70) 00:12:55.254 fused_ordering(71) 00:12:55.254 fused_ordering(72) 00:12:55.254 fused_ordering(73) 00:12:55.254 fused_ordering(74) 00:12:55.254 fused_ordering(75) 00:12:55.254 fused_ordering(76) 00:12:55.254 fused_ordering(77) 00:12:55.254 fused_ordering(78) 00:12:55.254 fused_ordering(79) 00:12:55.254 fused_ordering(80) 00:12:55.254 fused_ordering(81) 00:12:55.254 fused_ordering(82) 00:12:55.254 fused_ordering(83) 00:12:55.254 fused_ordering(84) 00:12:55.254 fused_ordering(85) 00:12:55.254 fused_ordering(86) 00:12:55.254 fused_ordering(87) 00:12:55.254 fused_ordering(88) 00:12:55.254 fused_ordering(89) 00:12:55.254 fused_ordering(90) 00:12:55.254 fused_ordering(91) 00:12:55.254 fused_ordering(92) 00:12:55.254 fused_ordering(93) 00:12:55.254 fused_ordering(94) 00:12:55.254 fused_ordering(95) 00:12:55.254 fused_ordering(96) 00:12:55.254 fused_ordering(97) 00:12:55.254 
fused_ordering(98) 00:12:55.254 [repetitive counter output condensed: fused_ordering(98) through fused_ordering(742) continue in unbroken ascending order, timestamps 00:12:55.254 through 00:12:56.040]
fused_ordering(743) 00:12:56.040 fused_ordering(744) 00:12:56.040 fused_ordering(745) 00:12:56.040 fused_ordering(746) 00:12:56.040 fused_ordering(747) 00:12:56.040 fused_ordering(748) 00:12:56.040 fused_ordering(749) 00:12:56.040 fused_ordering(750) 00:12:56.040 fused_ordering(751) 00:12:56.040 fused_ordering(752) 00:12:56.040 fused_ordering(753) 00:12:56.040 fused_ordering(754) 00:12:56.040 fused_ordering(755) 00:12:56.040 fused_ordering(756) 00:12:56.040 fused_ordering(757) 00:12:56.040 fused_ordering(758) 00:12:56.040 fused_ordering(759) 00:12:56.040 fused_ordering(760) 00:12:56.040 fused_ordering(761) 00:12:56.040 fused_ordering(762) 00:12:56.040 fused_ordering(763) 00:12:56.040 fused_ordering(764) 00:12:56.040 fused_ordering(765) 00:12:56.040 fused_ordering(766) 00:12:56.040 fused_ordering(767) 00:12:56.040 fused_ordering(768) 00:12:56.040 fused_ordering(769) 00:12:56.040 fused_ordering(770) 00:12:56.040 fused_ordering(771) 00:12:56.040 fused_ordering(772) 00:12:56.040 fused_ordering(773) 00:12:56.040 fused_ordering(774) 00:12:56.040 fused_ordering(775) 00:12:56.040 fused_ordering(776) 00:12:56.040 fused_ordering(777) 00:12:56.040 fused_ordering(778) 00:12:56.040 fused_ordering(779) 00:12:56.040 fused_ordering(780) 00:12:56.040 fused_ordering(781) 00:12:56.040 fused_ordering(782) 00:12:56.040 fused_ordering(783) 00:12:56.040 fused_ordering(784) 00:12:56.040 fused_ordering(785) 00:12:56.040 fused_ordering(786) 00:12:56.040 fused_ordering(787) 00:12:56.040 fused_ordering(788) 00:12:56.040 fused_ordering(789) 00:12:56.040 fused_ordering(790) 00:12:56.040 fused_ordering(791) 00:12:56.040 fused_ordering(792) 00:12:56.040 fused_ordering(793) 00:12:56.040 fused_ordering(794) 00:12:56.040 fused_ordering(795) 00:12:56.040 fused_ordering(796) 00:12:56.040 fused_ordering(797) 00:12:56.040 fused_ordering(798) 00:12:56.040 fused_ordering(799) 00:12:56.040 fused_ordering(800) 00:12:56.040 fused_ordering(801) 00:12:56.040 fused_ordering(802) 00:12:56.040 fused_ordering(803) 00:12:56.040 fused_ordering(804) 00:12:56.040 fused_ordering(805) 00:12:56.040 fused_ordering(806) 00:12:56.040 fused_ordering(807) 00:12:56.040 fused_ordering(808) 00:12:56.040 fused_ordering(809) 00:12:56.040 fused_ordering(810) 00:12:56.040 fused_ordering(811) 00:12:56.040 fused_ordering(812) 00:12:56.040 fused_ordering(813) 00:12:56.040 fused_ordering(814) 00:12:56.040 fused_ordering(815) 00:12:56.040 fused_ordering(816) 00:12:56.040 fused_ordering(817) 00:12:56.040 fused_ordering(818) 00:12:56.040 fused_ordering(819) 00:12:56.040 fused_ordering(820) 00:12:56.608 fused_ordering(821) 00:12:56.608 fused_ordering(822) 00:12:56.608 fused_ordering(823) 00:12:56.608 fused_ordering(824) 00:12:56.608 fused_ordering(825) 00:12:56.608 fused_ordering(826) 00:12:56.608 fused_ordering(827) 00:12:56.608 fused_ordering(828) 00:12:56.608 fused_ordering(829) 00:12:56.608 fused_ordering(830) 00:12:56.608 fused_ordering(831) 00:12:56.608 fused_ordering(832) 00:12:56.608 fused_ordering(833) 00:12:56.608 fused_ordering(834) 00:12:56.608 fused_ordering(835) 00:12:56.608 fused_ordering(836) 00:12:56.608 fused_ordering(837) 00:12:56.608 fused_ordering(838) 00:12:56.608 fused_ordering(839) 00:12:56.608 fused_ordering(840) 00:12:56.608 fused_ordering(841) 00:12:56.608 fused_ordering(842) 00:12:56.608 fused_ordering(843) 00:12:56.608 fused_ordering(844) 00:12:56.608 fused_ordering(845) 00:12:56.608 fused_ordering(846) 00:12:56.608 fused_ordering(847) 00:12:56.608 fused_ordering(848) 00:12:56.608 fused_ordering(849) 00:12:56.608 fused_ordering(850) 
00:12:56.608 fused_ordering(851) 00:12:56.608 fused_ordering(852) 00:12:56.608 fused_ordering(853) 00:12:56.608 fused_ordering(854) 00:12:56.608 fused_ordering(855) 00:12:56.608 fused_ordering(856) 00:12:56.608 fused_ordering(857) 00:12:56.608 fused_ordering(858) 00:12:56.608 fused_ordering(859) 00:12:56.608 fused_ordering(860) 00:12:56.608 fused_ordering(861) 00:12:56.608 fused_ordering(862) 00:12:56.608 fused_ordering(863) 00:12:56.608 fused_ordering(864) 00:12:56.608 fused_ordering(865) 00:12:56.608 fused_ordering(866) 00:12:56.608 fused_ordering(867) 00:12:56.608 fused_ordering(868) 00:12:56.608 fused_ordering(869) 00:12:56.608 fused_ordering(870) 00:12:56.608 fused_ordering(871) 00:12:56.608 fused_ordering(872) 00:12:56.608 fused_ordering(873) 00:12:56.608 fused_ordering(874) 00:12:56.608 fused_ordering(875) 00:12:56.608 fused_ordering(876) 00:12:56.608 fused_ordering(877) 00:12:56.608 fused_ordering(878) 00:12:56.608 fused_ordering(879) 00:12:56.608 fused_ordering(880) 00:12:56.608 fused_ordering(881) 00:12:56.608 fused_ordering(882) 00:12:56.608 fused_ordering(883) 00:12:56.608 fused_ordering(884) 00:12:56.608 fused_ordering(885) 00:12:56.608 fused_ordering(886) 00:12:56.608 fused_ordering(887) 00:12:56.608 fused_ordering(888) 00:12:56.608 fused_ordering(889) 00:12:56.608 fused_ordering(890) 00:12:56.608 fused_ordering(891) 00:12:56.608 fused_ordering(892) 00:12:56.608 fused_ordering(893) 00:12:56.608 fused_ordering(894) 00:12:56.608 fused_ordering(895) 00:12:56.608 fused_ordering(896) 00:12:56.608 fused_ordering(897) 00:12:56.608 fused_ordering(898) 00:12:56.608 fused_ordering(899) 00:12:56.608 fused_ordering(900) 00:12:56.608 fused_ordering(901) 00:12:56.608 fused_ordering(902) 00:12:56.608 fused_ordering(903) 00:12:56.608 fused_ordering(904) 00:12:56.608 fused_ordering(905) 00:12:56.608 fused_ordering(906) 00:12:56.608 fused_ordering(907) 00:12:56.608 fused_ordering(908) 00:12:56.608 fused_ordering(909) 00:12:56.608 fused_ordering(910) 00:12:56.608 fused_ordering(911) 00:12:56.608 fused_ordering(912) 00:12:56.608 fused_ordering(913) 00:12:56.608 fused_ordering(914) 00:12:56.608 fused_ordering(915) 00:12:56.608 fused_ordering(916) 00:12:56.608 fused_ordering(917) 00:12:56.608 fused_ordering(918) 00:12:56.608 fused_ordering(919) 00:12:56.608 fused_ordering(920) 00:12:56.608 fused_ordering(921) 00:12:56.608 fused_ordering(922) 00:12:56.608 fused_ordering(923) 00:12:56.608 fused_ordering(924) 00:12:56.608 fused_ordering(925) 00:12:56.608 fused_ordering(926) 00:12:56.608 fused_ordering(927) 00:12:56.608 fused_ordering(928) 00:12:56.608 fused_ordering(929) 00:12:56.608 fused_ordering(930) 00:12:56.608 fused_ordering(931) 00:12:56.608 fused_ordering(932) 00:12:56.608 fused_ordering(933) 00:12:56.608 fused_ordering(934) 00:12:56.608 fused_ordering(935) 00:12:56.608 fused_ordering(936) 00:12:56.608 fused_ordering(937) 00:12:56.608 fused_ordering(938) 00:12:56.608 fused_ordering(939) 00:12:56.608 fused_ordering(940) 00:12:56.608 fused_ordering(941) 00:12:56.608 fused_ordering(942) 00:12:56.608 fused_ordering(943) 00:12:56.608 fused_ordering(944) 00:12:56.608 fused_ordering(945) 00:12:56.608 fused_ordering(946) 00:12:56.608 fused_ordering(947) 00:12:56.608 fused_ordering(948) 00:12:56.608 fused_ordering(949) 00:12:56.608 fused_ordering(950) 00:12:56.608 fused_ordering(951) 00:12:56.608 fused_ordering(952) 00:12:56.608 fused_ordering(953) 00:12:56.608 fused_ordering(954) 00:12:56.608 fused_ordering(955) 00:12:56.608 fused_ordering(956) 00:12:56.608 fused_ordering(957) 00:12:56.608 
fused_ordering(958) 00:12:56.608 fused_ordering(959) 00:12:56.608 fused_ordering(960) 00:12:56.608 fused_ordering(961) 00:12:56.608 fused_ordering(962) 00:12:56.608 fused_ordering(963) 00:12:56.608 fused_ordering(964) 00:12:56.608 fused_ordering(965) 00:12:56.608 fused_ordering(966) 00:12:56.608 fused_ordering(967) 00:12:56.608 fused_ordering(968) 00:12:56.608 fused_ordering(969) 00:12:56.608 fused_ordering(970) 00:12:56.608 fused_ordering(971) 00:12:56.608 fused_ordering(972) 00:12:56.608 fused_ordering(973) 00:12:56.608 fused_ordering(974) 00:12:56.608 fused_ordering(975) 00:12:56.608 fused_ordering(976) 00:12:56.608 fused_ordering(977) 00:12:56.608 fused_ordering(978) 00:12:56.608 fused_ordering(979) 00:12:56.608 fused_ordering(980) 00:12:56.608 fused_ordering(981) 00:12:56.608 fused_ordering(982) 00:12:56.608 fused_ordering(983) 00:12:56.608 fused_ordering(984) 00:12:56.608 fused_ordering(985) 00:12:56.608 fused_ordering(986) 00:12:56.608 fused_ordering(987) 00:12:56.608 fused_ordering(988) 00:12:56.608 fused_ordering(989) 00:12:56.608 fused_ordering(990) 00:12:56.608 fused_ordering(991) 00:12:56.608 fused_ordering(992) 00:12:56.608 fused_ordering(993) 00:12:56.608 fused_ordering(994) 00:12:56.608 fused_ordering(995) 00:12:56.608 fused_ordering(996) 00:12:56.608 fused_ordering(997) 00:12:56.608 fused_ordering(998) 00:12:56.608 fused_ordering(999) 00:12:56.608 fused_ordering(1000) 00:12:56.608 fused_ordering(1001) 00:12:56.608 fused_ordering(1002) 00:12:56.608 fused_ordering(1003) 00:12:56.608 fused_ordering(1004) 00:12:56.608 fused_ordering(1005) 00:12:56.608 fused_ordering(1006) 00:12:56.608 fused_ordering(1007) 00:12:56.608 fused_ordering(1008) 00:12:56.608 fused_ordering(1009) 00:12:56.608 fused_ordering(1010) 00:12:56.608 fused_ordering(1011) 00:12:56.608 fused_ordering(1012) 00:12:56.608 fused_ordering(1013) 00:12:56.608 fused_ordering(1014) 00:12:56.608 fused_ordering(1015) 00:12:56.608 fused_ordering(1016) 00:12:56.608 fused_ordering(1017) 00:12:56.608 fused_ordering(1018) 00:12:56.608 fused_ordering(1019) 00:12:56.608 fused_ordering(1020) 00:12:56.608 fused_ordering(1021) 00:12:56.608 fused_ordering(1022) 00:12:56.608 fused_ordering(1023) 00:12:56.608 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:56.608 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.609 rmmod nvme_tcp 00:12:56.609 rmmod nvme_fabrics 00:12:56.609 rmmod nvme_keyring 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:56.609 07:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 77200 ']' 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 77200 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 77200 ']' 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 77200 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77200 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77200' 00:12:56.609 killing process with pid 77200 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 77200 00:12:56.609 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 77200 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:56.868 00:12:56.868 real 0m3.700s 00:12:56.868 user 0m4.160s 00:12:56.868 sys 0m1.215s 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.868 ************************************ 00:12:56.868 END TEST nvmf_fused_ordering 00:12:56.868 ************************************ 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.868 07:27:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.128 ************************************ 00:12:57.129 START TEST nvmf_ns_masking 00:12:57.129 ************************************ 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:57.129 * Looking for test storage... 00:12:57.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.129 07:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6d4ab4b3-8fac-436d-ad01-e47f8e41c16c 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2cdc2abc-59d8-41f9-8f90-b2d19ac32a0f 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=85077d30-c7cd-480e-86f8-404caf855bce 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:57.129 Cannot find device "nvmf_tgt_br" 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:12:57.129 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.391 Cannot find device "nvmf_tgt_br2" 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:57.391 Cannot find device "nvmf_tgt_br" 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:57.391 Cannot find device "nvmf_tgt_br2" 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.391 
07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:57.391 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:57.391 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:57.650 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:57.650 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:57.650 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:57.650 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:57.650 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:57.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:12:57.650 00:12:57.650 --- 10.0.0.2 ping statistics --- 00:12:57.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.650 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:57.650 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:57.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:57.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:12:57.650 00:12:57.650 --- 10.0.0.3 ping statistics --- 00:12:57.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.650 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:57.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:12:57.651 00:12:57.651 --- 10.0.0.1 ping statistics --- 00:12:57.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.651 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=77442 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 77442 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 77442 ']' 00:12:57.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
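Condensed from the nvmf_veth_init commands logged above, the network fixture for this test is a veth pair per interface bridged in the root namespace, with the target-side interface moved into the nvmf_tgt_ns_spdk namespace. A rough bash sketch follows (device names, addresses and the nvmf_tgt path are taken from the log; the link-up steps and the second target interface nvmf_tgt_if2/10.0.0.3 are omitted, so treat this as a sketch rather than the exact nvmf/common.sh implementation):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side is pushed into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The three ping checks above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) only confirm that this plumbing is reachable before nvmfappstart launches nvmf_tgt inside the namespace.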
00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.651 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.651 [2024-07-25 07:27:30.274768] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:12:57.651 [2024-07-25 07:27:30.274920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.909 [2024-07-25 07:27:30.398440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.909 [2024-07-25 07:27:30.519913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.909 [2024-07-25 07:27:30.520060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.909 [2024-07-25 07:27:30.520100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.909 [2024-07-25 07:27:30.520106] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.909 [2024-07-25 07:27:30.520111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
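The rpc.py calls issued over the rest of this test build the namespace-masking fixture; consolidated here as a sketch for readability (flags copied verbatim from the log lines below, waitforlisten retries and socket-path defaults omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MB malloc bdevs with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # later in the test the namespace is removed, re-added hidden, and exposed per host:
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Because the subsystem is created with -a (allow any host), visibility is controlled per namespace and per host NQN rather than by subsystem-level host whitelisting, which is exactly what --no-auto-visible together with nvmf_ns_add_host / nvmf_ns_remove_host exercises.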
00:12:57.909 [2024-07-25 07:27:30.520156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.476 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:58.735 [2024-07-25 07:27:31.380816] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.735 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:58.735 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:58.735 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:58.999 Malloc1 00:12:58.999 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:59.257 Malloc2 00:12:59.257 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.515 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:59.775 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.776 [2024-07-25 07:27:32.481162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.776 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:59.776 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85077d30-c7cd-480e-86f8-404caf855bce -a 10.0.0.2 -s 4420 -i 4 00:13:00.036 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.036 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:13:00.036 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.036 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:00.036 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:13:01.939 07:27:34 
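On the initiator side, the connect and visibility probes that appear in the log below reduce to roughly the following (the -I host-identifier UUID passed by the test harness and its retry loops are omitted here):

    # connect as host1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # waitforserial: poll until a block device carrying the subsystem serial shows up
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # ns_is_visible: the namespace must be listed and must report a non-zero NGUID
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros means the namespace is masked for this host
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1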
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:01.939 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.197 [ 0]:0x1 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9808440ead554a4e9f926d1c5211975d 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9808440ead554a4e9f926d1c5211975d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.197 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.455 [ 0]:0x1 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9808440ead554a4e9f926d1c5211975d 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9808440ead554a4e9f926d1c5211975d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:13:02.455 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.456 [ 1]:0x2 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.456 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.715 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85077d30-c7cd-480e-86f8-404caf855bce -a 10.0.0.2 -s 4420 -i 4 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:13:03.026 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:13:04.934 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:04.934 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:04.934 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.935 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:04.935 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.935 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:13:04.935 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:04.935 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.194 [ 0]:0x2 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.194 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.454 [ 0]:0x1 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9808440ead554a4e9f926d1c5211975d 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9808440ead554a4e9f926d1c5211975d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.454 [ 1]:0x2 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.454 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.714 07:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.714 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.973 [ 0]:0x2 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.973 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85077d30-c7cd-480e-86f8-404caf855bce -a 10.0.0.2 -s 4420 -i 4 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:13:06.232 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:13:08.140 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:08.140 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:08.140 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.140 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:13:08.140 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.140 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.401 [ 0]:0x1 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9808440ead554a4e9f926d1c5211975d 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9808440ead554a4e9f926d1c5211975d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.401 [ 1]:0x2 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.401 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.401 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:08.401 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.401 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.660 [ 0]:0x2 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.660 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@648 -- # local es=0 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:08.920 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:08.920 [2024-07-25 07:27:41.646244] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:08.920 2024/07/25 07:27:41 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:13:08.920 request: 00:13:08.920 { 00:13:08.920 "method": "nvmf_ns_remove_host", 00:13:08.920 "params": { 00:13:08.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.920 "nsid": 2, 00:13:08.920 "host": "nqn.2016-06.io.spdk:host1" 00:13:08.920 } 00:13:08.920 } 00:13:08.920 Got JSON-RPC error response 00:13:08.920 GoRPCClient: error on JSON-RPC call 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.179 07:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:09.179 [ 0]:0x2 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52aa69e94f2a498dbc1f5bb99af94c43 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52aa69e94f2a498dbc1f5bb99af94c43 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=77806 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 77806 /var/tmp/host.sock 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@829 -- # '[' -z 77806 ']' 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:09.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.179 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.179 [2024-07-25 07:27:41.898913] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:13:09.179 [2024-07-25 07:27:41.899087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77806 ] 00:13:09.438 [2024-07-25 07:27:42.036895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.438 [2024-07-25 07:27:42.142995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.373 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.373 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:10.373 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.373 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.631 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6d4ab4b3-8fac-436d-ad01-e47f8e41c16c 00:13:10.631 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:10.631 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6D4AB4B38FAC436DAD01E47F8E41C16C -i 00:13:10.889 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2cdc2abc-59d8-41f9-8f90-b2d19ac32a0f 00:13:10.889 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:10.889 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2CDC2ABC59D841F98F90B2D19AC32A0F -i 00:13:11.147 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:11.405 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:11.663 07:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:11.663 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:11.922 nvme0n1 00:13:11.922 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:11.922 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:12.180 nvme1n2 00:13:12.180 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:12.180 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:12.180 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:12.180 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:12.180 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:12.439 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:12.439 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:12.439 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:12.439 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:12.700 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6d4ab4b3-8fac-436d-ad01-e47f8e41c16c == \6\d\4\a\b\4\b\3\-\8\f\a\c\-\4\3\6\d\-\a\d\0\1\-\e\4\7\f\8\e\4\1\c\1\6\c ]] 00:13:12.700 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:12.700 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:12.700 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2cdc2abc-59d8-41f9-8f90-b2d19ac32a0f == \2\c\d\c\2\a\b\c\-\5\9\d\8\-\4\1\f\9\-\8\f\9\0\-\b\2\d\1\9\a\c\3\2\a\0\f ]] 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 77806 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 77806 ']' 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 77806 00:13:12.959 07:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77806 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:12.959 killing process with pid 77806 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77806' 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 77806 00:13:12.959 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 77806 00:13:13.527 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.527 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:13.527 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:13.527 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.527 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.786 rmmod nvme_tcp 00:13:13.786 rmmod nvme_fabrics 00:13:13.786 rmmod nvme_keyring 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 77442 ']' 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 77442 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 77442 ']' 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 77442 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77442 00:13:13.786 killing process with pid 77442 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77442' 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 77442 00:13:13.786 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 77442 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:14.045 00:13:14.045 real 0m17.045s 00:13:14.045 user 0m26.388s 00:13:14.045 sys 0m2.696s 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.045 ************************************ 00:13:14.045 END TEST nvmf_ns_masking 00:13:14.045 ************************************ 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.045 ************************************ 00:13:14.045 START TEST nvmf_auth_target 00:13:14.045 ************************************ 00:13:14.045 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:14.305 * Looking for test storage... 
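For reference, the namespace-visibility probe that the ns_masking trace above keeps repeating (target/ns_masking.sh@43-45) reduces to roughly the following bash helper; the controller device and NSID values are illustrative, and this is a condensed reading of the trace rather than the script verbatim:

    ns_is_visible() {
        local nsid=$1                                  # e.g. 0x1 or 0x2
        nvme list-ns /dev/nvme0 | grep "$nsid"         # prints "[ n]:<nsid>" when the namespace is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # A namespace masked from this host reports an all-zero NGUID in the trace,
        # so the test treats "non-zero NGUID" as "visible".
        [[ $nguid != "00000000000000000000000000000000" ]]
    }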
00:13:14.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.305 07:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 
-- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
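The nvmf_veth_init run traced below wires the initiator address (10.0.0.1) to the target's network namespace (10.0.0.2 and 10.0.0.3) through a Linux bridge. Stripped of the cleanup and error branches, the commands visible in the trace amount to roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                          # bridge the host-side ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

(Bringing the links up and the ping checks are omitted here; they appear in full in the trace.)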
00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:14.305 Cannot find device "nvmf_tgt_br" 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:14.305 Cannot find device "nvmf_tgt_br2" 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:14.305 Cannot find device "nvmf_tgt_br" 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:14.305 Cannot find device "nvmf_tgt_br2" 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:13:14.305 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:14.566 07:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:14.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:13:14.566 00:13:14.566 --- 10.0.0.2 ping statistics --- 00:13:14.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.566 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:14.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:14.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:13:14.566 00:13:14.566 --- 10.0.0.3 ping statistics --- 00:13:14.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.566 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:14.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:14.566 00:13:14.566 --- 10.0.0.1 ping statistics --- 00:13:14.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.566 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=78171 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 78171 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78171 ']' 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.566 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
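The gen_dhchap_key calls traced next just draw random bytes and render them as hex, with the byte count and digest id chosen per key (null=0, sha256=1, sha384=2, sha512=3). A minimal sketch of one "null, 48 hex character" key, leaving the actual DHHC-1 encoding to the script's format_dhchap_key helper:

    key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex characters, as in the trace
    file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.xGC
    # nvmf/common.sh's format_dhchap_key wraps "$key" plus the digest id into a DHHC-1 secret
    # via a small python snippet (not reproduced here); the result is stored in "$file"
    chmod 0600 "$file"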
00:13:14.567 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.567 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=78215 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=992fb012a51e1d53104d6dfdf598aa4be337479fe4f30fd7 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xGC 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 992fb012a51e1d53104d6dfdf598aa4be337479fe4f30fd7 0 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 992fb012a51e1d53104d6dfdf598aa4be337479fe4f30fd7 0 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=992fb012a51e1d53104d6dfdf598aa4be337479fe4f30fd7 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:15.945 07:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xGC 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xGC 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.xGC 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f72626103c7a03ea97d570c984b9cb9d026c05a1ba66fd3aad8768b2638da629 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sxZ 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f72626103c7a03ea97d570c984b9cb9d026c05a1ba66fd3aad8768b2638da629 3 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f72626103c7a03ea97d570c984b9cb9d026c05a1ba66fd3aad8768b2638da629 3 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f72626103c7a03ea97d570c984b9cb9d026c05a1ba66fd3aad8768b2638da629 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sxZ 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sxZ 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.sxZ 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:15.945 07:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=299a8fefec6e97f534743908025daeec 00:13:15.945 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aD9 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 299a8fefec6e97f534743908025daeec 1 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 299a8fefec6e97f534743908025daeec 1 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=299a8fefec6e97f534743908025daeec 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aD9 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aD9 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.aD9 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53e8fd679846072cf3368cffa3f81ad893393689c74f3269 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5TI 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53e8fd679846072cf3368cffa3f81ad893393689c74f3269 2 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53e8fd679846072cf3368cffa3f81ad893393689c74f3269 2 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53e8fd679846072cf3368cffa3f81ad893393689c74f3269 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5TI 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5TI 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.5TI 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=131bcfb6f1ded3f753dccea8340b9e590761f3692b96fd69 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hwt 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 131bcfb6f1ded3f753dccea8340b9e590761f3692b96fd69 2 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 131bcfb6f1ded3f753dccea8340b9e590761f3692b96fd69 2 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=131bcfb6f1ded3f753dccea8340b9e590761f3692b96fd69 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hwt 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hwt 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.hwt 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:15.946 07:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4db024f109364a6093448e5f7f62b0ea 00:13:15.946 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mtl 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4db024f109364a6093448e5f7f62b0ea 1 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4db024f109364a6093448e5f7f62b0ea 1 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4db024f109364a6093448e5f7f62b0ea 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mtl 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mtl 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.mtl 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3be97062f0aa6bdc796fd77af636bf3a6890f2d4075e9223c380bf7eb6002f04 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.P9f 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
3be97062f0aa6bdc796fd77af636bf3a6890f2d4075e9223c380bf7eb6002f04 3 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3be97062f0aa6bdc796fd77af636bf3a6890f2d4075e9223c380bf7eb6002f04 3 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3be97062f0aa6bdc796fd77af636bf3a6890f2d4075e9223c380bf7eb6002f04 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.P9f 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.P9f 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.P9f 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 78171 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78171 ']' 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.206 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 78215 /var/tmp/host.sock 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78215 ']' 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
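The gen_dhchap_key calls traced above draw random hex from /dev/urandom with xxd and wrap it into a DH-HMAC-CHAP secret of the form DHHC-1:<digest>:<base64>: via an inline python snippet. A minimal sketch of that wrapping follows; the digest codes (0=null/cleartext, 1=sha256, 2=sha384, 3=sha512) come from the digests map in the trace, and it is assumed that the base64 payload is the ASCII hex key followed by its little-endian CRC-32, which is consistent with the DHHC-1 secrets passed to the later nvme connect commands.

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex characters, matching "gen_dhchap_key null 48" above
secret=$(python3 - "$key" <<'PY'
import sys, base64, zlib
key = sys.argv[1].encode()                     # assumption: the ASCII hex string itself is the secret material
crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: little-endian CRC-32 appended to the key
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile" && chmod 0600 "$keyfile"    # mirrors the mktemp/chmod 0600 steps in the trace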
00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.466 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xGC 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xGC 00:13:16.726 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xGC 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.sxZ ]] 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sxZ 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sxZ 00:13:16.985 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sxZ 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aD9 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.aD9 00:13:17.245 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.aD9 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.5TI ]] 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5TI 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5TI 00:13:17.504 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5TI 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hwt 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.hwt 00:13:17.762 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.hwt 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.mtl ]] 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mtl 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mtl 00:13:18.021 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mtl 00:13:18.280 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:18.280 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.P9f 00:13:18.280 07:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.280 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.280 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.280 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.P9f 00:13:18.280 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.P9f 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.538 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.796 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.796 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.796 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:13:19.056 00:13:19.056 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.056 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.056 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.316 { 00:13:19.316 "auth": { 00:13:19.316 "dhgroup": "null", 00:13:19.316 "digest": "sha256", 00:13:19.316 "state": "completed" 00:13:19.316 }, 00:13:19.316 "cntlid": 1, 00:13:19.316 "listen_address": { 00:13:19.316 "adrfam": "IPv4", 00:13:19.316 "traddr": "10.0.0.2", 00:13:19.316 "trsvcid": "4420", 00:13:19.316 "trtype": "TCP" 00:13:19.316 }, 00:13:19.316 "peer_address": { 00:13:19.316 "adrfam": "IPv4", 00:13:19.316 "traddr": "10.0.0.1", 00:13:19.316 "trsvcid": "50770", 00:13:19.316 "trtype": "TCP" 00:13:19.316 }, 00:13:19.316 "qid": 0, 00:13:19.316 "state": "enabled", 00:13:19.316 "thread": "nvmf_tgt_poll_group_000" 00:13:19.316 } 00:13:19.316 ]' 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.316 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.575 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.781 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:23.781 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.040 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.325 00:13:24.325 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.325 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.325 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
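Each connect_authenticate round traced here follows the same RPC sequence. Condensed into stand-alone commands, one round looks roughly like the sketch below; the paths, NQNs and host UUID are copied from the trace, key1/ckey1 are assumed to already be registered on both sides with keyring_file_add_key, and the target-side commands assume the default /var/tmp/spdk.sock RPC socket that rpc_cmd uses in this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# host side: restrict the initiator to the digest/dhgroup combination under test
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# target side: allow the host NQN on the subsystem with the matching key pair
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach the controller; this is where DH-HMAC-CHAP actually runs
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# target side: the qpair's auth block should then report state "completed"
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'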
00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.582 { 00:13:24.582 "auth": { 00:13:24.582 "dhgroup": "null", 00:13:24.582 "digest": "sha256", 00:13:24.582 "state": "completed" 00:13:24.582 }, 00:13:24.582 "cntlid": 3, 00:13:24.582 "listen_address": { 00:13:24.582 "adrfam": "IPv4", 00:13:24.582 "traddr": "10.0.0.2", 00:13:24.582 "trsvcid": "4420", 00:13:24.582 "trtype": "TCP" 00:13:24.582 }, 00:13:24.582 "peer_address": { 00:13:24.582 "adrfam": "IPv4", 00:13:24.582 "traddr": "10.0.0.1", 00:13:24.582 "trsvcid": "50810", 00:13:24.582 "trtype": "TCP" 00:13:24.582 }, 00:13:24.582 "qid": 0, 00:13:24.582 "state": "enabled", 00:13:24.582 "thread": "nvmf_tgt_poll_group_000" 00:13:24.582 } 00:13:24.582 ]' 00:13:24.582 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.840 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.098 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
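Between RPC-driven rounds the test also exercises the Linux kernel initiator: it connects with nvme-cli, passing the generated DHHC-1 secrets directly, then disconnects and removes the host from the subsystem before switching keys. Stripped of the test wrappers, that step looks roughly as follows; the DHHC-1 strings are abbreviated placeholders here (the full secrets appear verbatim in the trace), and an nvme-cli build with DH-HMAC-CHAP support plus the nvme-tcp module loaded earlier in the log are assumed.

# connect using the per-round secret and controller secret
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
    --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b \
    --dhchap-secret 'DHHC-1:01:...:' --dhchap-ctrl-secret 'DHHC-1:02:...:'
# tear the session down again before the next digest/dhgroup combination
nvme disconnect -n nqn.2024-03.io.spdk:cnode0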
00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:26.032 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.290 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.548 00:13:26.548 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.548 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.548 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.807 { 00:13:26.807 "auth": { 00:13:26.807 "dhgroup": "null", 00:13:26.807 "digest": "sha256", 00:13:26.807 "state": "completed" 00:13:26.807 }, 00:13:26.807 "cntlid": 5, 00:13:26.807 "listen_address": { 00:13:26.807 "adrfam": "IPv4", 00:13:26.807 "traddr": "10.0.0.2", 00:13:26.807 "trsvcid": "4420", 00:13:26.807 "trtype": "TCP" 00:13:26.807 }, 00:13:26.807 "peer_address": { 00:13:26.807 "adrfam": "IPv4", 00:13:26.807 "traddr": "10.0.0.1", 00:13:26.807 "trsvcid": "43244", 00:13:26.807 "trtype": "TCP" 00:13:26.807 }, 00:13:26.807 "qid": 0, 00:13:26.807 "state": "enabled", 00:13:26.807 "thread": "nvmf_tgt_poll_group_000" 00:13:26.807 } 00:13:26.807 ]' 00:13:26.807 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.065 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.323 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.257 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:28.257 07:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.515 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.773 00:13:28.773 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.773 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.773 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.338 { 00:13:29.338 "auth": { 00:13:29.338 "dhgroup": "null", 00:13:29.338 "digest": "sha256", 00:13:29.338 "state": "completed" 00:13:29.338 }, 00:13:29.338 "cntlid": 7, 00:13:29.338 "listen_address": { 00:13:29.338 "adrfam": "IPv4", 00:13:29.338 
"traddr": "10.0.0.2", 00:13:29.338 "trsvcid": "4420", 00:13:29.338 "trtype": "TCP" 00:13:29.338 }, 00:13:29.338 "peer_address": { 00:13:29.338 "adrfam": "IPv4", 00:13:29.338 "traddr": "10.0.0.1", 00:13:29.338 "trsvcid": "43284", 00:13:29.338 "trtype": "TCP" 00:13:29.338 }, 00:13:29.338 "qid": 0, 00:13:29.338 "state": "enabled", 00:13:29.338 "thread": "nvmf_tgt_poll_group_000" 00:13:29.338 } 00:13:29.338 ]' 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.338 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.339 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.596 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.531 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.793 
07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.793 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.053 00:13:31.053 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.053 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.053 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.311 { 00:13:31.311 "auth": { 00:13:31.311 "dhgroup": "ffdhe2048", 00:13:31.311 "digest": "sha256", 00:13:31.311 "state": "completed" 00:13:31.311 }, 00:13:31.311 "cntlid": 9, 00:13:31.311 "listen_address": { 00:13:31.311 "adrfam": "IPv4", 00:13:31.311 "traddr": "10.0.0.2", 00:13:31.311 "trsvcid": "4420", 00:13:31.311 "trtype": "TCP" 00:13:31.311 }, 00:13:31.311 "peer_address": { 00:13:31.311 "adrfam": "IPv4", 00:13:31.311 "traddr": "10.0.0.1", 00:13:31.311 "trsvcid": "43318", 00:13:31.311 "trtype": "TCP" 00:13:31.311 }, 00:13:31.311 "qid": 0, 00:13:31.311 "state": "enabled", 00:13:31.311 "thread": "nvmf_tgt_poll_group_000" 00:13:31.311 } 
00:13:31.311 ]' 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.311 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.569 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.569 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.569 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.569 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.569 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.826 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:32.450 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:32.708 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.709 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.966 00:13:32.967 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.967 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.967 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.225 { 00:13:33.225 "auth": { 00:13:33.225 "dhgroup": "ffdhe2048", 00:13:33.225 "digest": "sha256", 00:13:33.225 "state": "completed" 00:13:33.225 }, 00:13:33.225 "cntlid": 11, 00:13:33.225 "listen_address": { 00:13:33.225 "adrfam": "IPv4", 00:13:33.225 "traddr": "10.0.0.2", 00:13:33.225 "trsvcid": "4420", 00:13:33.225 "trtype": "TCP" 00:13:33.225 }, 00:13:33.225 "peer_address": { 00:13:33.225 "adrfam": "IPv4", 00:13:33.225 "traddr": "10.0.0.1", 00:13:33.225 "trsvcid": "43362", 00:13:33.225 "trtype": "TCP" 00:13:33.225 }, 00:13:33.225 "qid": 0, 00:13:33.225 "state": "enabled", 00:13:33.225 "thread": "nvmf_tgt_poll_group_000" 00:13:33.225 } 00:13:33.225 ]' 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.225 07:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.225 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.483 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.048 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.306 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:34.306 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
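The entries above trace one pass of the test's connect_authenticate helper (target/auth.sh@34-@49) for sha256/ffdhe2048: the host NQN is registered on the subsystem with a DH-HMAC-CHAP key pair, a host-side controller is attached through the bdev_nvme layer, the negotiated auth parameters are read back from the target's qpair listing, and the controller is detached again. A condensed sketch of that sequence, assuming (as in this run) the nvmf target answers on the default SPDK RPC socket and the host application on /var/tmp/host.sock; key2/ckey2 are key names loaded earlier in the script, not literal secrets:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b

  # target side: allow this host and bind the DH-HMAC-CHAP key pair to it
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach a controller, authenticating with the same key pair
  $RPC -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # the controller should now exist on the host ...
  [[ "$($RPC -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # ... and the target's qpair should report the negotiated digest, dhgroup and auth state
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

  # tear the controller down before the next key is tried
  $RPC -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0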
00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.692 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.692 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.949 { 00:13:34.949 "auth": { 00:13:34.949 "dhgroup": "ffdhe2048", 00:13:34.949 "digest": "sha256", 00:13:34.949 "state": "completed" 00:13:34.949 }, 00:13:34.949 "cntlid": 13, 00:13:34.949 "listen_address": { 00:13:34.949 "adrfam": "IPv4", 00:13:34.949 "traddr": "10.0.0.2", 00:13:34.949 "trsvcid": "4420", 00:13:34.949 "trtype": "TCP" 00:13:34.949 }, 00:13:34.949 "peer_address": { 00:13:34.949 "adrfam": "IPv4", 00:13:34.949 "traddr": "10.0.0.1", 00:13:34.949 "trsvcid": "43398", 00:13:34.949 "trtype": "TCP" 00:13:34.949 }, 00:13:34.949 "qid": 0, 00:13:34.949 "state": "enabled", 00:13:34.949 "thread": "nvmf_tgt_poll_group_000" 00:13:34.949 } 00:13:34.949 ]' 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.949 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.207 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.207 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.207 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.207 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.207 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.464 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:36.398 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.398 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.399 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.399 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.964 00:13:36.964 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.964 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.964 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.222 { 00:13:37.222 "auth": { 00:13:37.222 "dhgroup": "ffdhe2048", 00:13:37.222 "digest": "sha256", 00:13:37.222 "state": "completed" 00:13:37.222 }, 00:13:37.222 "cntlid": 15, 00:13:37.222 "listen_address": { 00:13:37.222 "adrfam": "IPv4", 00:13:37.222 "traddr": "10.0.0.2", 00:13:37.222 "trsvcid": "4420", 00:13:37.222 "trtype": "TCP" 00:13:37.222 }, 00:13:37.222 "peer_address": { 00:13:37.222 "adrfam": "IPv4", 00:13:37.222 "traddr": "10.0.0.1", 00:13:37.222 "trsvcid": "43198", 00:13:37.222 "trtype": "TCP" 00:13:37.222 }, 00:13:37.222 "qid": 0, 00:13:37.222 "state": "enabled", 00:13:37.222 "thread": "nvmf_tgt_poll_group_000" 00:13:37.222 } 00:13:37.222 ]' 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:37.222 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.485 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.485 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.485 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.746 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:13:38.311 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.312 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.570 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.135 00:13:39.135 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.135 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.135 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.394 { 00:13:39.394 "auth": { 00:13:39.394 "dhgroup": "ffdhe3072", 00:13:39.394 "digest": "sha256", 00:13:39.394 "state": "completed" 00:13:39.394 }, 00:13:39.394 "cntlid": 17, 00:13:39.394 "listen_address": { 00:13:39.394 "adrfam": "IPv4", 00:13:39.394 "traddr": "10.0.0.2", 00:13:39.394 "trsvcid": "4420", 00:13:39.394 "trtype": "TCP" 00:13:39.394 }, 00:13:39.394 "peer_address": { 00:13:39.394 "adrfam": "IPv4", 00:13:39.394 "traddr": "10.0.0.1", 00:13:39.394 "trsvcid": "43242", 00:13:39.394 "trtype": "TCP" 00:13:39.394 }, 00:13:39.394 "qid": 0, 00:13:39.394 "state": "enabled", 00:13:39.394 "thread": "nvmf_tgt_poll_group_000" 00:13:39.394 } 00:13:39.394 ]' 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.394 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.394 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.394 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.394 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.394 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.394 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.652 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:13:40.588 07:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.588 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.159 00:13:41.159 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.159 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
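Interleaved with the RPC-level checks, auth.sh@52-@56 also exercises the kernel initiator: nvme-cli connects to the same subsystem passing the literal DHHC-1 secrets, the connection is dropped, and the host entry is removed from the subsystem before the next iteration. A minimal sketch of that round trip, with the secrets abbreviated; the real values are the base64 DHHC-1:xx:...: strings visible in the trace and have to match the keys configured on the target:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b
  HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b
  DHCHAP_KEY='DHHC-1:00:...'        # host secret (abbreviated here)
  DHCHAP_CTRL_KEY='DHHC-1:03:...'   # controller secret (abbreviated here)

  # connect through the kernel NVMe/TCP initiator, authenticating in both directions
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

  # drop the connection and de-register the host again
  nvme disconnect -n "$SUBNQN"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"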
00:13:41.159 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.417 { 00:13:41.417 "auth": { 00:13:41.417 "dhgroup": "ffdhe3072", 00:13:41.417 "digest": "sha256", 00:13:41.417 "state": "completed" 00:13:41.417 }, 00:13:41.417 "cntlid": 19, 00:13:41.417 "listen_address": { 00:13:41.417 "adrfam": "IPv4", 00:13:41.417 "traddr": "10.0.0.2", 00:13:41.417 "trsvcid": "4420", 00:13:41.417 "trtype": "TCP" 00:13:41.417 }, 00:13:41.417 "peer_address": { 00:13:41.417 "adrfam": "IPv4", 00:13:41.417 "traddr": "10.0.0.1", 00:13:41.417 "trsvcid": "43274", 00:13:41.417 "trtype": "TCP" 00:13:41.417 }, 00:13:41.417 "qid": 0, 00:13:41.417 "state": "enabled", 00:13:41.417 "thread": "nvmf_tgt_poll_group_000" 00:13:41.417 } 00:13:41.417 ]' 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.417 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.675 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.675 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.675 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.934 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:13:42.500 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.500 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:42.500 07:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.500 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.759 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.759 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.759 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:42.759 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.017 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.275 00:13:43.275 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.275 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.275 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.535 { 00:13:43.535 "auth": { 00:13:43.535 "dhgroup": "ffdhe3072", 00:13:43.535 "digest": "sha256", 00:13:43.535 "state": "completed" 00:13:43.535 }, 00:13:43.535 "cntlid": 21, 00:13:43.535 "listen_address": { 00:13:43.535 "adrfam": "IPv4", 00:13:43.535 "traddr": "10.0.0.2", 00:13:43.535 "trsvcid": "4420", 00:13:43.535 "trtype": "TCP" 00:13:43.535 }, 00:13:43.535 "peer_address": { 00:13:43.535 "adrfam": "IPv4", 00:13:43.535 "traddr": "10.0.0.1", 00:13:43.535 "trsvcid": "43300", 00:13:43.535 "trtype": "TCP" 00:13:43.535 }, 00:13:43.535 "qid": 0, 00:13:43.535 "state": "enabled", 00:13:43.535 "thread": "nvmf_tgt_poll_group_000" 00:13:43.535 } 00:13:43.535 ]' 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.535 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.794 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.794 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.794 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.794 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.794 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.053 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:44.620 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.879 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.137 00:13:45.395 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.395 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.395 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.395 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.395 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.395 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.395 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.395 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.395 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.395 { 00:13:45.395 "auth": { 
00:13:45.395 "dhgroup": "ffdhe3072", 00:13:45.395 "digest": "sha256", 00:13:45.395 "state": "completed" 00:13:45.395 }, 00:13:45.395 "cntlid": 23, 00:13:45.395 "listen_address": { 00:13:45.395 "adrfam": "IPv4", 00:13:45.395 "traddr": "10.0.0.2", 00:13:45.395 "trsvcid": "4420", 00:13:45.395 "trtype": "TCP" 00:13:45.395 }, 00:13:45.395 "peer_address": { 00:13:45.395 "adrfam": "IPv4", 00:13:45.395 "traddr": "10.0.0.1", 00:13:45.395 "trsvcid": "50374", 00:13:45.395 "trtype": "TCP" 00:13:45.395 }, 00:13:45.395 "qid": 0, 00:13:45.395 "state": "enabled", 00:13:45.395 "thread": "nvmf_tgt_poll_group_000" 00:13:45.395 } 00:13:45.395 ]' 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.653 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.911 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:46.477 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:46.735 07:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.735 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.993 00:13:46.993 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.994 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.994 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.254 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.254 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.254 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.254 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.254 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.254 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.254 { 00:13:47.254 "auth": { 00:13:47.254 "dhgroup": "ffdhe4096", 00:13:47.254 "digest": "sha256", 00:13:47.254 "state": "completed" 00:13:47.254 }, 00:13:47.254 "cntlid": 25, 00:13:47.254 "listen_address": { 00:13:47.254 "adrfam": "IPv4", 00:13:47.254 "traddr": "10.0.0.2", 00:13:47.254 "trsvcid": "4420", 00:13:47.254 "trtype": "TCP" 00:13:47.254 }, 00:13:47.254 "peer_address": { 00:13:47.254 
"adrfam": "IPv4", 00:13:47.254 "traddr": "10.0.0.1", 00:13:47.254 "trsvcid": "50386", 00:13:47.254 "trtype": "TCP" 00:13:47.254 }, 00:13:47.255 "qid": 0, 00:13:47.255 "state": "enabled", 00:13:47.255 "thread": "nvmf_tgt_poll_group_000" 00:13:47.255 } 00:13:47.255 ]' 00:13:47.255 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.514 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.773 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:48.342 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.600 07:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.600 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.601 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.601 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.860 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.118 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.119 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.119 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.119 { 00:13:49.119 "auth": { 00:13:49.119 "dhgroup": "ffdhe4096", 00:13:49.119 "digest": "sha256", 00:13:49.119 "state": "completed" 00:13:49.119 }, 00:13:49.119 "cntlid": 27, 00:13:49.119 "listen_address": { 00:13:49.119 "adrfam": "IPv4", 00:13:49.119 "traddr": "10.0.0.2", 00:13:49.119 "trsvcid": "4420", 00:13:49.119 "trtype": "TCP" 00:13:49.119 }, 00:13:49.119 "peer_address": { 00:13:49.119 "adrfam": "IPv4", 00:13:49.119 "traddr": "10.0.0.1", 00:13:49.119 "trsvcid": "50402", 00:13:49.119 "trtype": "TCP" 00:13:49.119 }, 00:13:49.119 "qid": 0, 00:13:49.119 "state": "enabled", 00:13:49.119 "thread": "nvmf_tgt_poll_group_000" 00:13:49.119 } 00:13:49.119 ]' 00:13:49.119 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.379 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.638 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:50.207 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.467 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.468 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.727 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.987 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.246 { 00:13:51.246 "auth": { 00:13:51.246 "dhgroup": "ffdhe4096", 00:13:51.246 "digest": "sha256", 00:13:51.246 "state": "completed" 00:13:51.246 }, 00:13:51.246 "cntlid": 29, 00:13:51.246 "listen_address": { 00:13:51.246 "adrfam": "IPv4", 00:13:51.246 "traddr": "10.0.0.2", 00:13:51.246 "trsvcid": "4420", 00:13:51.246 "trtype": "TCP" 00:13:51.246 }, 00:13:51.246 "peer_address": { 00:13:51.246 "adrfam": "IPv4", 00:13:51.246 "traddr": "10.0.0.1", 00:13:51.246 "trsvcid": "50418", 00:13:51.246 "trtype": "TCP" 00:13:51.246 }, 00:13:51.246 "qid": 0, 00:13:51.246 "state": "enabled", 00:13:51.246 "thread": "nvmf_tgt_poll_group_000" 00:13:51.246 } 00:13:51.246 ]' 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.246 07:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.246 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.505 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:52.071 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.329 07:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.329 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.896 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.896 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.155 { 00:13:53.155 "auth": { 00:13:53.155 "dhgroup": "ffdhe4096", 00:13:53.155 "digest": "sha256", 00:13:53.155 "state": "completed" 00:13:53.155 }, 00:13:53.155 "cntlid": 31, 00:13:53.155 "listen_address": { 00:13:53.155 "adrfam": "IPv4", 00:13:53.155 "traddr": "10.0.0.2", 00:13:53.155 "trsvcid": "4420", 00:13:53.155 "trtype": "TCP" 00:13:53.155 }, 00:13:53.155 "peer_address": { 00:13:53.155 "adrfam": "IPv4", 00:13:53.155 "traddr": "10.0.0.1", 00:13:53.155 "trsvcid": "50444", 00:13:53.155 "trtype": "TCP" 00:13:53.155 }, 00:13:53.155 "qid": 0, 00:13:53.155 "state": "enabled", 00:13:53.155 "thread": "nvmf_tgt_poll_group_000" 00:13:53.155 } 00:13:53.155 ]' 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.155 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.413 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:53.980 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:54.238 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.802 00:13:54.802 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.802 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.802 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.061 { 00:13:55.061 "auth": { 00:13:55.061 "dhgroup": "ffdhe6144", 00:13:55.061 "digest": "sha256", 00:13:55.061 "state": "completed" 00:13:55.061 }, 00:13:55.061 "cntlid": 33, 00:13:55.061 "listen_address": { 00:13:55.061 "adrfam": "IPv4", 00:13:55.061 "traddr": "10.0.0.2", 00:13:55.061 "trsvcid": "4420", 00:13:55.061 "trtype": "TCP" 00:13:55.061 }, 00:13:55.061 "peer_address": { 00:13:55.061 "adrfam": "IPv4", 00:13:55.061 "traddr": "10.0.0.1", 00:13:55.061 "trsvcid": "50470", 00:13:55.061 "trtype": "TCP" 00:13:55.061 }, 00:13:55.061 "qid": 0, 00:13:55.061 "state": "enabled", 00:13:55.061 "thread": "nvmf_tgt_poll_group_000" 00:13:55.061 } 00:13:55.061 ]' 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.061 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.320 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid 
e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:13:55.886 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.886 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:55.886 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.886 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.144 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.710 00:13:56.710 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.710 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.710 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.968 { 00:13:56.968 "auth": { 00:13:56.968 "dhgroup": "ffdhe6144", 00:13:56.968 "digest": "sha256", 00:13:56.968 "state": "completed" 00:13:56.968 }, 00:13:56.968 "cntlid": 35, 00:13:56.968 "listen_address": { 00:13:56.968 "adrfam": "IPv4", 00:13:56.968 "traddr": "10.0.0.2", 00:13:56.968 "trsvcid": "4420", 00:13:56.968 "trtype": "TCP" 00:13:56.968 }, 00:13:56.968 "peer_address": { 00:13:56.968 "adrfam": "IPv4", 00:13:56.968 "traddr": "10.0.0.1", 00:13:56.968 "trsvcid": "32822", 00:13:56.968 "trtype": "TCP" 00:13:56.968 }, 00:13:56.968 "qid": 0, 00:13:56.968 "state": "enabled", 00:13:56.968 "thread": "nvmf_tgt_poll_group_000" 00:13:56.968 } 00:13:56.968 ]' 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.968 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.227 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.201 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.201 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.767 00:13:58.767 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.767 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.767 07:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.026 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.026 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.026 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.026 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.026 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.026 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.026 { 00:13:59.026 "auth": { 00:13:59.026 "dhgroup": "ffdhe6144", 00:13:59.026 "digest": "sha256", 00:13:59.026 "state": "completed" 00:13:59.026 }, 00:13:59.026 "cntlid": 37, 00:13:59.026 "listen_address": { 00:13:59.026 "adrfam": "IPv4", 00:13:59.026 "traddr": "10.0.0.2", 00:13:59.026 "trsvcid": "4420", 00:13:59.026 "trtype": "TCP" 00:13:59.026 }, 00:13:59.026 "peer_address": { 00:13:59.026 "adrfam": "IPv4", 00:13:59.026 "traddr": "10.0.0.1", 00:13:59.026 "trsvcid": "32844", 00:13:59.026 "trtype": "TCP" 00:13:59.026 }, 00:13:59.026 "qid": 0, 00:13:59.026 "state": "enabled", 00:13:59.027 "thread": "nvmf_tgt_poll_group_000" 00:13:59.027 } 00:13:59.027 ]' 00:13:59.027 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.027 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.027 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.027 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.027 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.285 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.285 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.285 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.544 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
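The per-key sequence that repeats throughout this run condenses to the sketch below. The rpc.py path, host socket, NQNs, host UUID, key names, and jq checks are copied from the log above; the target-side rpc_cmd calls are shown here as plain rpc.py invocations against the target's default socket (an assumption of this sketch), key1/ckey1 refer to key objects registered earlier in the run, and the DHHC-1 placeholders stand in for the literal secrets printed in the log.

# One connect_authenticate iteration (sha256 / ffdhe4096 / key1), condensed from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b

# Host side: restrict bdev_nvme to the digest/dhgroup under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side: allow the host and bind the host/controller key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller over TCP, authenticating with the same keys.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up and the target reports completed authentication.
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect: sha256
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe4096
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed

# Tear down the RPC path, repeat the handshake with nvme-cli using the literal
# DHHC-1 secrets (see the generated values printed in the log), then clean up.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b \
  --dhchap-secret "DHHC-1:01:<host key from log>" --dhchap-ctrl-secret "DHHC-1:02:<ctrlr key from log>"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each iteration in this portion of the log varies only the dhgroup (ffdhe4096, ffdhe6144, ffdhe8192, and later null once the digest switches to sha384) and the key index, so a mismatch in any of the jq checks pins the failing digest/dhgroup/key combination.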
00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:00.111 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.369 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.934 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.934 07:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.934 { 00:14:00.934 "auth": { 00:14:00.934 "dhgroup": "ffdhe6144", 00:14:00.934 "digest": "sha256", 00:14:00.934 "state": "completed" 00:14:00.934 }, 00:14:00.934 "cntlid": 39, 00:14:00.934 "listen_address": { 00:14:00.934 "adrfam": "IPv4", 00:14:00.934 "traddr": "10.0.0.2", 00:14:00.934 "trsvcid": "4420", 00:14:00.934 "trtype": "TCP" 00:14:00.934 }, 00:14:00.934 "peer_address": { 00:14:00.934 "adrfam": "IPv4", 00:14:00.934 "traddr": "10.0.0.1", 00:14:00.934 "trsvcid": "32860", 00:14:00.934 "trtype": "TCP" 00:14:00.934 }, 00:14:00.934 "qid": 0, 00:14:00.934 "state": "enabled", 00:14:00.934 "thread": "nvmf_tgt_poll_group_000" 00:14:00.934 } 00:14:00.934 ]' 00:14:00.934 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.192 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.454 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:02.020 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.278 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.844 00:14:02.845 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.845 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.845 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.103 { 00:14:03.103 "auth": { 00:14:03.103 "dhgroup": 
"ffdhe8192", 00:14:03.103 "digest": "sha256", 00:14:03.103 "state": "completed" 00:14:03.103 }, 00:14:03.103 "cntlid": 41, 00:14:03.103 "listen_address": { 00:14:03.103 "adrfam": "IPv4", 00:14:03.103 "traddr": "10.0.0.2", 00:14:03.103 "trsvcid": "4420", 00:14:03.103 "trtype": "TCP" 00:14:03.103 }, 00:14:03.103 "peer_address": { 00:14:03.103 "adrfam": "IPv4", 00:14:03.103 "traddr": "10.0.0.1", 00:14:03.103 "trsvcid": "32880", 00:14:03.103 "trtype": "TCP" 00:14:03.103 }, 00:14:03.103 "qid": 0, 00:14:03.103 "state": "enabled", 00:14:03.103 "thread": "nvmf_tgt_poll_group_000" 00:14:03.103 } 00:14:03.103 ]' 00:14:03.103 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.363 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.621 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.195 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.453 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.043 00:14:05.043 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.043 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.043 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.327 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.327 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.327 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.327 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.327 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.327 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.327 { 00:14:05.327 "auth": { 00:14:05.327 "dhgroup": "ffdhe8192", 00:14:05.327 "digest": "sha256", 00:14:05.327 "state": "completed" 00:14:05.327 }, 00:14:05.327 "cntlid": 43, 00:14:05.327 "listen_address": { 00:14:05.327 "adrfam": "IPv4", 00:14:05.327 "traddr": "10.0.0.2", 00:14:05.327 "trsvcid": "4420", 00:14:05.327 "trtype": "TCP" 00:14:05.327 }, 00:14:05.327 "peer_address": { 00:14:05.327 "adrfam": "IPv4", 00:14:05.327 "traddr": 
"10.0.0.1", 00:14:05.327 "trsvcid": "32906", 00:14:05.327 "trtype": "TCP" 00:14:05.327 }, 00:14:05.327 "qid": 0, 00:14:05.327 "state": "enabled", 00:14:05.327 "thread": "nvmf_tgt_poll_group_000" 00:14:05.327 } 00:14:05.327 ]' 00:14:05.327 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.327 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.327 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.584 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.584 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.584 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.584 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.584 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.842 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:06.406 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.406 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:06.406 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.406 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.407 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.407 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.407 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.407 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:06.664 07:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.664 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.231 00:14:07.231 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.231 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.231 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.489 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.489 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.489 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.489 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.747 { 00:14:07.747 "auth": { 00:14:07.747 "dhgroup": "ffdhe8192", 00:14:07.747 "digest": "sha256", 00:14:07.747 "state": "completed" 00:14:07.747 }, 00:14:07.747 "cntlid": 45, 00:14:07.747 "listen_address": { 00:14:07.747 "adrfam": "IPv4", 00:14:07.747 "traddr": "10.0.0.2", 00:14:07.747 "trsvcid": "4420", 00:14:07.747 "trtype": "TCP" 00:14:07.747 }, 00:14:07.747 "peer_address": { 00:14:07.747 "adrfam": "IPv4", 00:14:07.747 "traddr": "10.0.0.1", 00:14:07.747 "trsvcid": "43520", 00:14:07.747 "trtype": "TCP" 00:14:07.747 }, 00:14:07.747 "qid": 0, 00:14:07.747 "state": "enabled", 00:14:07.747 "thread": "nvmf_tgt_poll_group_000" 00:14:07.747 } 00:14:07.747 ]' 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.747 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.005 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:08.572 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 
--dhchap-key key3 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.832 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.432 00:14:09.432 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.432 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.432 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.690 { 00:14:09.690 "auth": { 00:14:09.690 "dhgroup": "ffdhe8192", 00:14:09.690 "digest": "sha256", 00:14:09.690 "state": "completed" 00:14:09.690 }, 00:14:09.690 "cntlid": 47, 00:14:09.690 "listen_address": { 00:14:09.690 "adrfam": "IPv4", 00:14:09.690 "traddr": "10.0.0.2", 00:14:09.690 "trsvcid": "4420", 00:14:09.690 "trtype": "TCP" 00:14:09.690 }, 00:14:09.690 "peer_address": { 00:14:09.690 "adrfam": "IPv4", 00:14:09.690 "traddr": "10.0.0.1", 00:14:09.690 "trsvcid": "43536", 00:14:09.690 "trtype": "TCP" 00:14:09.690 }, 00:14:09.690 "qid": 0, 00:14:09.690 "state": "enabled", 00:14:09.690 "thread": "nvmf_tgt_poll_group_000" 00:14:09.690 } 00:14:09.690 ]' 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.690 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.947 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:09.948 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.948 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:09.948 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.948 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.206 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.776 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.035 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.293 00:14:11.293 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.293 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.293 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.858 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.858 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.858 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.858 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.858 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.858 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.858 { 00:14:11.858 "auth": { 00:14:11.858 "dhgroup": "null", 00:14:11.858 "digest": "sha384", 00:14:11.858 "state": "completed" 00:14:11.858 }, 00:14:11.858 "cntlid": 49, 00:14:11.858 "listen_address": { 00:14:11.858 "adrfam": "IPv4", 00:14:11.858 "traddr": "10.0.0.2", 00:14:11.859 "trsvcid": "4420", 00:14:11.859 "trtype": "TCP" 00:14:11.859 }, 00:14:11.859 "peer_address": { 00:14:11.859 "adrfam": "IPv4", 00:14:11.859 "traddr": "10.0.0.1", 00:14:11.859 "trsvcid": "43568", 00:14:11.859 "trtype": "TCP" 00:14:11.859 }, 00:14:11.859 "qid": 0, 00:14:11.859 "state": "enabled", 00:14:11.859 "thread": "nvmf_tgt_poll_group_000" 00:14:11.859 } 00:14:11.859 ]' 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.859 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.116 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.684 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.942 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.201 00:14:13.201 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.201 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.201 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.459 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.459 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.459 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.459 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.459 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.459 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.459 { 00:14:13.459 "auth": { 00:14:13.459 "dhgroup": "null", 00:14:13.459 "digest": "sha384", 00:14:13.459 "state": "completed" 00:14:13.459 }, 00:14:13.459 "cntlid": 51, 00:14:13.459 "listen_address": { 00:14:13.459 "adrfam": "IPv4", 00:14:13.459 "traddr": "10.0.0.2", 00:14:13.459 "trsvcid": "4420", 00:14:13.459 "trtype": "TCP" 00:14:13.459 }, 00:14:13.459 "peer_address": { 00:14:13.459 "adrfam": "IPv4", 00:14:13.459 "traddr": "10.0.0.1", 00:14:13.459 "trsvcid": "43600", 00:14:13.459 "trtype": "TCP" 00:14:13.459 }, 00:14:13.459 "qid": 0, 00:14:13.459 "state": "enabled", 00:14:13.459 "thread": "nvmf_tgt_poll_group_000" 00:14:13.459 } 00:14:13.460 ]' 00:14:13.460 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.460 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.460 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.717 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:13.717 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.717 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.717 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.717 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.975 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret 
DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:14.541 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.799 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.057 00:14:15.057 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.057 07:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.057 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.315 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.316 { 00:14:15.316 "auth": { 00:14:15.316 "dhgroup": "null", 00:14:15.316 "digest": "sha384", 00:14:15.316 "state": "completed" 00:14:15.316 }, 00:14:15.316 "cntlid": 53, 00:14:15.316 "listen_address": { 00:14:15.316 "adrfam": "IPv4", 00:14:15.316 "traddr": "10.0.0.2", 00:14:15.316 "trsvcid": "4420", 00:14:15.316 "trtype": "TCP" 00:14:15.316 }, 00:14:15.316 "peer_address": { 00:14:15.316 "adrfam": "IPv4", 00:14:15.316 "traddr": "10.0.0.1", 00:14:15.316 "trsvcid": "43620", 00:14:15.316 "trtype": "TCP" 00:14:15.316 }, 00:14:15.316 "qid": 0, 00:14:15.316 "state": "enabled", 00:14:15.316 "thread": "nvmf_tgt_poll_group_000" 00:14:15.316 } 00:14:15.316 ]' 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.316 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.316 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:15.316 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.316 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.316 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.316 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.882 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:16.448 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:16.449 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.707 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.965 00:14:16.965 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.965 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.965 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.223 { 00:14:17.223 "auth": { 00:14:17.223 "dhgroup": "null", 00:14:17.223 "digest": "sha384", 00:14:17.223 "state": "completed" 00:14:17.223 }, 00:14:17.223 "cntlid": 55, 00:14:17.223 "listen_address": { 00:14:17.223 "adrfam": "IPv4", 00:14:17.223 "traddr": "10.0.0.2", 00:14:17.223 "trsvcid": "4420", 00:14:17.223 "trtype": "TCP" 00:14:17.223 }, 00:14:17.223 "peer_address": { 00:14:17.223 "adrfam": "IPv4", 00:14:17.223 "traddr": "10.0.0.1", 00:14:17.223 "trsvcid": "54110", 00:14:17.223 "trtype": "TCP" 00:14:17.223 }, 00:14:17.223 "qid": 0, 00:14:17.223 "state": "enabled", 00:14:17.223 "thread": "nvmf_tgt_poll_group_000" 00:14:17.223 } 00:14:17.223 ]' 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.223 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.481 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:18.415 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.415 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.673 00:14:18.673 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.673 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.673 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.931 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.931 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.931 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.931 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.931 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.931 07:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.931 { 00:14:18.931 "auth": { 00:14:18.931 "dhgroup": "ffdhe2048", 00:14:18.931 "digest": "sha384", 00:14:18.931 "state": "completed" 00:14:18.931 }, 00:14:18.931 "cntlid": 57, 00:14:18.931 "listen_address": { 00:14:18.931 "adrfam": "IPv4", 00:14:18.931 "traddr": "10.0.0.2", 00:14:18.931 "trsvcid": "4420", 00:14:18.931 "trtype": "TCP" 00:14:18.931 }, 00:14:18.931 "peer_address": { 00:14:18.931 "adrfam": "IPv4", 00:14:18.931 "traddr": "10.0.0.1", 00:14:18.931 "trsvcid": "54130", 00:14:18.931 "trtype": "TCP" 00:14:18.931 }, 00:14:18.931 "qid": 0, 00:14:18.931 "state": "enabled", 00:14:18.931 "thread": "nvmf_tgt_poll_group_000" 00:14:18.931 } 00:14:18.931 ]' 00:14:18.931 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.190 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.448 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:20.014 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.273 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.531 00:14:20.788 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.788 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.788 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.046 { 00:14:21.046 "auth": { 00:14:21.046 "dhgroup": "ffdhe2048", 00:14:21.046 "digest": "sha384", 00:14:21.046 "state": "completed" 00:14:21.046 }, 00:14:21.046 "cntlid": 59, 00:14:21.046 "listen_address": { 00:14:21.046 "adrfam": "IPv4", 00:14:21.046 "traddr": "10.0.0.2", 00:14:21.046 "trsvcid": 
"4420", 00:14:21.046 "trtype": "TCP" 00:14:21.046 }, 00:14:21.046 "peer_address": { 00:14:21.046 "adrfam": "IPv4", 00:14:21.046 "traddr": "10.0.0.1", 00:14:21.046 "trsvcid": "54146", 00:14:21.046 "trtype": "TCP" 00:14:21.046 }, 00:14:21.046 "qid": 0, 00:14:21.046 "state": "enabled", 00:14:21.046 "thread": "nvmf_tgt_poll_group_000" 00:14:21.046 } 00:14:21.046 ]' 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.046 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.303 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.238 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.497 00:14:22.497 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.497 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.497 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.755 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.755 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.755 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.755 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.013 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.013 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.013 { 00:14:23.013 "auth": { 00:14:23.013 "dhgroup": "ffdhe2048", 00:14:23.013 "digest": "sha384", 00:14:23.013 "state": "completed" 00:14:23.013 }, 00:14:23.013 "cntlid": 61, 00:14:23.013 "listen_address": { 00:14:23.013 "adrfam": "IPv4", 00:14:23.013 "traddr": "10.0.0.2", 00:14:23.013 "trsvcid": "4420", 00:14:23.013 "trtype": "TCP" 00:14:23.014 }, 00:14:23.014 "peer_address": { 00:14:23.014 "adrfam": "IPv4", 00:14:23.014 "traddr": "10.0.0.1", 00:14:23.014 "trsvcid": "54174", 00:14:23.014 "trtype": "TCP" 00:14:23.014 }, 00:14:23.014 "qid": 0, 00:14:23.014 "state": "enabled", 00:14:23.014 "thread": "nvmf_tgt_poll_group_000" 00:14:23.014 } 00:14:23.014 ]' 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.014 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.271 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:23.852 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.852 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:23.852 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.852 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.852 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.852 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.853 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:23.853 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.110 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.368 00:14:24.625 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.625 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.625 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.883 { 00:14:24.883 "auth": { 00:14:24.883 "dhgroup": "ffdhe2048", 00:14:24.883 "digest": "sha384", 00:14:24.883 "state": "completed" 00:14:24.883 }, 00:14:24.883 "cntlid": 63, 00:14:24.883 "listen_address": { 00:14:24.883 "adrfam": "IPv4", 00:14:24.883 "traddr": "10.0.0.2", 00:14:24.883 "trsvcid": "4420", 00:14:24.883 "trtype": "TCP" 00:14:24.883 }, 00:14:24.883 "peer_address": { 00:14:24.883 "adrfam": "IPv4", 00:14:24.883 "traddr": "10.0.0.1", 00:14:24.883 "trsvcid": "54210", 00:14:24.883 "trtype": "TCP" 00:14:24.883 }, 00:14:24.883 "qid": 0, 00:14:24.883 "state": "enabled", 00:14:24.883 "thread": "nvmf_tgt_poll_group_000" 00:14:24.883 } 00:14:24.883 ]' 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.883 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:14:25.141 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.141 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.141 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.399 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:25.964 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.222 07:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.222 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.795 00:14:26.795 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.795 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.795 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.053 { 00:14:27.053 "auth": { 00:14:27.053 "dhgroup": "ffdhe3072", 00:14:27.053 "digest": "sha384", 00:14:27.053 "state": "completed" 00:14:27.053 }, 00:14:27.053 "cntlid": 65, 00:14:27.053 "listen_address": { 00:14:27.053 "adrfam": "IPv4", 00:14:27.053 "traddr": "10.0.0.2", 00:14:27.053 "trsvcid": "4420", 00:14:27.053 "trtype": "TCP" 00:14:27.053 }, 00:14:27.053 "peer_address": { 00:14:27.053 "adrfam": "IPv4", 00:14:27.053 "traddr": "10.0.0.1", 00:14:27.053 "trsvcid": "58940", 00:14:27.053 "trtype": "TCP" 00:14:27.053 }, 00:14:27.053 "qid": 0, 00:14:27.053 "state": "enabled", 00:14:27.053 "thread": "nvmf_tgt_poll_group_000" 00:14:27.053 } 00:14:27.053 ]' 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.053 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.312 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:28.245 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.812 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.812 { 00:14:28.812 "auth": { 00:14:28.812 "dhgroup": "ffdhe3072", 00:14:28.812 "digest": "sha384", 00:14:28.812 "state": "completed" 00:14:28.812 }, 00:14:28.812 "cntlid": 67, 00:14:28.812 "listen_address": { 00:14:28.812 "adrfam": "IPv4", 00:14:28.812 "traddr": "10.0.0.2", 00:14:28.812 "trsvcid": "4420", 00:14:28.812 "trtype": "TCP" 00:14:28.812 }, 00:14:28.812 "peer_address": { 00:14:28.812 "adrfam": "IPv4", 00:14:28.812 "traddr": "10.0.0.1", 00:14:28.812 "trsvcid": "58960", 00:14:28.812 "trtype": "TCP" 00:14:28.812 }, 00:14:28.812 "qid": 0, 00:14:28.812 "state": "enabled", 00:14:28.812 "thread": "nvmf_tgt_poll_group_000" 00:14:28.812 } 00:14:28.812 ]' 00:14:28.812 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.070 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.328 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid 
e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.896 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.154 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:30.722 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.723 { 00:14:30.723 "auth": { 00:14:30.723 "dhgroup": "ffdhe3072", 00:14:30.723 "digest": "sha384", 00:14:30.723 "state": "completed" 00:14:30.723 }, 00:14:30.723 "cntlid": 69, 00:14:30.723 "listen_address": { 00:14:30.723 "adrfam": "IPv4", 00:14:30.723 "traddr": "10.0.0.2", 00:14:30.723 "trsvcid": "4420", 00:14:30.723 "trtype": "TCP" 00:14:30.723 }, 00:14:30.723 "peer_address": { 00:14:30.723 "adrfam": "IPv4", 00:14:30.723 "traddr": "10.0.0.1", 00:14:30.723 "trsvcid": "59002", 00:14:30.723 "trtype": "TCP" 00:14:30.723 }, 00:14:30.723 "qid": 0, 00:14:30.723 "state": "enabled", 00:14:30.723 "thread": "nvmf_tgt_poll_group_000" 00:14:30.723 } 00:14:30.723 ]' 00:14:30.723 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.979 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.235 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.799 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.057 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.626 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.626 { 00:14:32.626 "auth": { 00:14:32.626 "dhgroup": "ffdhe3072", 00:14:32.626 "digest": "sha384", 00:14:32.626 "state": "completed" 00:14:32.626 }, 00:14:32.626 "cntlid": 71, 00:14:32.626 "listen_address": { 00:14:32.626 "adrfam": "IPv4", 00:14:32.626 "traddr": "10.0.0.2", 00:14:32.626 "trsvcid": "4420", 00:14:32.626 "trtype": "TCP" 00:14:32.626 }, 00:14:32.626 "peer_address": { 00:14:32.626 "adrfam": "IPv4", 00:14:32.626 "traddr": "10.0.0.1", 00:14:32.626 "trsvcid": "59014", 00:14:32.626 "trtype": "TCP" 00:14:32.626 }, 00:14:32.626 "qid": 0, 00:14:32.626 "state": "enabled", 00:14:32.626 "thread": "nvmf_tgt_poll_group_000" 00:14:32.626 } 00:14:32.626 ]' 00:14:32.626 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.884 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.141 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:33.707 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.965 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.532 00:14:34.532 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.532 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.532 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.790 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.790 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.790 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.791 07:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.791 { 00:14:34.791 "auth": { 00:14:34.791 "dhgroup": "ffdhe4096", 00:14:34.791 "digest": "sha384", 00:14:34.791 "state": "completed" 00:14:34.791 }, 00:14:34.791 "cntlid": 73, 00:14:34.791 "listen_address": { 00:14:34.791 "adrfam": "IPv4", 00:14:34.791 "traddr": "10.0.0.2", 00:14:34.791 "trsvcid": "4420", 00:14:34.791 "trtype": "TCP" 00:14:34.791 }, 00:14:34.791 "peer_address": { 00:14:34.791 "adrfam": "IPv4", 00:14:34.791 "traddr": "10.0.0.1", 00:14:34.791 "trsvcid": "59046", 00:14:34.791 "trtype": "TCP" 00:14:34.791 }, 00:14:34.791 "qid": 0, 00:14:34.791 "state": "enabled", 00:14:34.791 "thread": "nvmf_tgt_poll_group_000" 00:14:34.791 } 00:14:34.791 ]' 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.791 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.057 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.989 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.556 00:14:36.556 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.556 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.556 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.815 { 00:14:36.815 "auth": { 00:14:36.815 "dhgroup": "ffdhe4096", 
00:14:36.815 "digest": "sha384", 00:14:36.815 "state": "completed" 00:14:36.815 }, 00:14:36.815 "cntlid": 75, 00:14:36.815 "listen_address": { 00:14:36.815 "adrfam": "IPv4", 00:14:36.815 "traddr": "10.0.0.2", 00:14:36.815 "trsvcid": "4420", 00:14:36.815 "trtype": "TCP" 00:14:36.815 }, 00:14:36.815 "peer_address": { 00:14:36.815 "adrfam": "IPv4", 00:14:36.815 "traddr": "10.0.0.1", 00:14:36.815 "trsvcid": "49808", 00:14:36.815 "trtype": "TCP" 00:14:36.815 }, 00:14:36.815 "qid": 0, 00:14:36.815 "state": "enabled", 00:14:36.815 "thread": "nvmf_tgt_poll_group_000" 00:14:36.815 } 00:14:36.815 ]' 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.815 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.073 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.073 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.073 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.331 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:37.898 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.156 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.414 00:14:38.414 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.414 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.414 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.672 { 00:14:38.672 "auth": { 00:14:38.672 "dhgroup": "ffdhe4096", 00:14:38.672 "digest": "sha384", 00:14:38.672 "state": "completed" 00:14:38.672 }, 00:14:38.672 "cntlid": 77, 00:14:38.672 "listen_address": { 00:14:38.672 "adrfam": "IPv4", 00:14:38.672 "traddr": "10.0.0.2", 00:14:38.672 "trsvcid": "4420", 00:14:38.672 "trtype": "TCP" 00:14:38.672 }, 00:14:38.672 "peer_address": { 00:14:38.672 "adrfam": "IPv4", 00:14:38.672 "traddr": "10.0.0.1", 00:14:38.672 "trsvcid": "49832", 00:14:38.672 "trtype": 
"TCP" 00:14:38.672 }, 00:14:38.672 "qid": 0, 00:14:38.672 "state": "enabled", 00:14:38.672 "thread": "nvmf_tgt_poll_group_000" 00:14:38.672 } 00:14:38.672 ]' 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.672 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.237 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:39.806 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.071 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.340 00:14:40.340 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.340 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.340 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.609 { 00:14:40.609 "auth": { 00:14:40.609 "dhgroup": "ffdhe4096", 00:14:40.609 "digest": "sha384", 00:14:40.609 "state": "completed" 00:14:40.609 }, 00:14:40.609 "cntlid": 79, 00:14:40.609 "listen_address": { 00:14:40.609 "adrfam": "IPv4", 00:14:40.609 "traddr": "10.0.0.2", 00:14:40.609 "trsvcid": "4420", 00:14:40.609 "trtype": "TCP" 00:14:40.609 }, 00:14:40.609 "peer_address": { 00:14:40.609 "adrfam": "IPv4", 00:14:40.609 "traddr": "10.0.0.1", 00:14:40.609 "trsvcid": "49860", 00:14:40.609 "trtype": "TCP" 00:14:40.609 }, 00:14:40.609 "qid": 0, 00:14:40.609 "state": "enabled", 00:14:40.609 "thread": "nvmf_tgt_poll_group_000" 00:14:40.609 } 00:14:40.609 ]' 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.609 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.879 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.879 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.879 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.879 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.846 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.847 07:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.847 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.416 00:14:42.416 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.416 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.416 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.676 { 00:14:42.676 "auth": { 00:14:42.676 "dhgroup": "ffdhe6144", 00:14:42.676 "digest": "sha384", 00:14:42.676 "state": "completed" 00:14:42.676 }, 00:14:42.676 "cntlid": 81, 00:14:42.676 "listen_address": { 00:14:42.676 "adrfam": "IPv4", 00:14:42.676 "traddr": "10.0.0.2", 00:14:42.676 "trsvcid": "4420", 00:14:42.676 "trtype": "TCP" 00:14:42.676 }, 00:14:42.676 "peer_address": { 00:14:42.676 "adrfam": "IPv4", 00:14:42.676 "traddr": "10.0.0.1", 00:14:42.676 "trsvcid": "49882", 00:14:42.676 "trtype": "TCP" 00:14:42.676 }, 00:14:42.676 "qid": 0, 00:14:42.676 "state": "enabled", 00:14:42.676 "thread": "nvmf_tgt_poll_group_000" 00:14:42.676 } 00:14:42.676 ]' 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.676 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.935 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.872 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.442 00:14:44.442 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.442 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.442 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.702 { 00:14:44.702 "auth": { 00:14:44.702 "dhgroup": "ffdhe6144", 00:14:44.702 "digest": "sha384", 00:14:44.702 "state": "completed" 00:14:44.702 }, 00:14:44.702 "cntlid": 83, 00:14:44.702 "listen_address": { 00:14:44.702 "adrfam": "IPv4", 00:14:44.702 "traddr": "10.0.0.2", 00:14:44.702 "trsvcid": "4420", 00:14:44.702 "trtype": "TCP" 00:14:44.702 }, 00:14:44.702 "peer_address": { 00:14:44.702 "adrfam": "IPv4", 00:14:44.702 "traddr": "10.0.0.1", 00:14:44.702 "trsvcid": "49900", 00:14:44.702 "trtype": "TCP" 00:14:44.702 }, 00:14:44.702 "qid": 0, 00:14:44.702 "state": "enabled", 00:14:44.702 "thread": "nvmf_tgt_poll_group_000" 00:14:44.702 } 00:14:44.702 ]' 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.702 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.961 07:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.900 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.469 00:14:46.469 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.469 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.469 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.728 { 00:14:46.728 "auth": { 00:14:46.728 "dhgroup": "ffdhe6144", 00:14:46.728 "digest": "sha384", 00:14:46.728 "state": "completed" 00:14:46.728 }, 00:14:46.728 "cntlid": 85, 00:14:46.728 "listen_address": { 00:14:46.728 "adrfam": "IPv4", 00:14:46.728 "traddr": "10.0.0.2", 00:14:46.728 "trsvcid": "4420", 00:14:46.728 "trtype": "TCP" 00:14:46.728 }, 00:14:46.728 "peer_address": { 00:14:46.728 "adrfam": "IPv4", 00:14:46.728 "traddr": "10.0.0.1", 00:14:46.728 "trsvcid": "39312", 00:14:46.728 "trtype": "TCP" 00:14:46.728 }, 00:14:46.728 "qid": 0, 00:14:46.728 "state": "enabled", 00:14:46.728 "thread": "nvmf_tgt_poll_group_000" 00:14:46.728 } 00:14:46.728 ]' 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.728 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.294 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret 
DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:47.861 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.118 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.685 00:14:48.685 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.685 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:14:48.685 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.943 { 00:14:48.943 "auth": { 00:14:48.943 "dhgroup": "ffdhe6144", 00:14:48.943 "digest": "sha384", 00:14:48.943 "state": "completed" 00:14:48.943 }, 00:14:48.943 "cntlid": 87, 00:14:48.943 "listen_address": { 00:14:48.943 "adrfam": "IPv4", 00:14:48.943 "traddr": "10.0.0.2", 00:14:48.943 "trsvcid": "4420", 00:14:48.943 "trtype": "TCP" 00:14:48.943 }, 00:14:48.943 "peer_address": { 00:14:48.943 "adrfam": "IPv4", 00:14:48.943 "traddr": "10.0.0.1", 00:14:48.943 "trsvcid": "39360", 00:14:48.943 "trtype": "TCP" 00:14:48.943 }, 00:14:48.943 "qid": 0, 00:14:48.943 "state": "enabled", 00:14:48.943 "thread": "nvmf_tgt_poll_group_000" 00:14:48.943 } 00:14:48.943 ]' 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.943 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.510 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:50.076 07:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:50.076 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.334 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.901 00:14:50.901 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.901 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.901 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
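The trace above begins the ffdhe8192 pass of the loop: the host-side bdev_nvme_set_options call narrows the initiator to the digest/DH group under test, the target-side nvmf_subsystem_add_host call pins the key pair the host NQN must present, and bdev_nvme_attach_controller then has to complete DH-HMAC-CHAP before the controller appears. A condensed sketch of that RPC sequence, assuming the rpc.py path and /var/tmp/host.sock socket shown in the trace, that key0/ckey0 name keys registered earlier in the test script, and that the target-side rpc_cmd helper can be approximated by a plain rpc.py call on the default socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b

  # Host side: only offer the digest/DH group under test for DH-HMAC-CHAP.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host NQN and pin the key pair it must use
  # (the ckey makes the authentication bidirectional).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; the attach only succeeds if
  # DH-HMAC-CHAP completes with key0/ckey0.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0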
00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.160 { 00:14:51.160 "auth": { 00:14:51.160 "dhgroup": "ffdhe8192", 00:14:51.160 "digest": "sha384", 00:14:51.160 "state": "completed" 00:14:51.160 }, 00:14:51.160 "cntlid": 89, 00:14:51.160 "listen_address": { 00:14:51.160 "adrfam": "IPv4", 00:14:51.160 "traddr": "10.0.0.2", 00:14:51.160 "trsvcid": "4420", 00:14:51.160 "trtype": "TCP" 00:14:51.160 }, 00:14:51.160 "peer_address": { 00:14:51.160 "adrfam": "IPv4", 00:14:51.160 "traddr": "10.0.0.1", 00:14:51.160 "trsvcid": "39380", 00:14:51.160 "trtype": "TCP" 00:14:51.160 }, 00:14:51.160 "qid": 0, 00:14:51.160 "state": "enabled", 00:14:51.160 "thread": "nvmf_tgt_poll_group_000" 00:14:51.160 } 00:14:51.160 ]' 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.160 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.419 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.419 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.419 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.419 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.354 07:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.354 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.354 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.921 00:14:52.921 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.921 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.921 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.488 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.488 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.488 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.488 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.488 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.488 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.488 { 00:14:53.488 "auth": { 00:14:53.488 "dhgroup": "ffdhe8192", 00:14:53.488 "digest": "sha384", 00:14:53.488 "state": "completed" 00:14:53.488 }, 00:14:53.488 "cntlid": 91, 00:14:53.488 "listen_address": { 00:14:53.488 "adrfam": "IPv4", 00:14:53.488 "traddr": "10.0.0.2", 00:14:53.488 "trsvcid": "4420", 00:14:53.488 "trtype": "TCP" 00:14:53.488 }, 00:14:53.488 "peer_address": { 00:14:53.488 "adrfam": "IPv4", 00:14:53.488 "traddr": "10.0.0.1", 00:14:53.488 "trsvcid": "39394", 00:14:53.488 "trtype": "TCP" 00:14:53.488 }, 00:14:53.488 "qid": 0, 00:14:53.488 "state": "enabled", 00:14:53.489 "thread": "nvmf_tgt_poll_group_000" 00:14:53.489 } 00:14:53.489 ]' 00:14:53.489 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.489 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.489 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.489 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.489 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.489 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.489 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.489 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.748 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:14:54.317 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.317 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:54.317 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.317 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.317 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.317 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:54.317 07:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:54.576 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:54.576 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.576 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:54.576 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.577 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.146 00:14:55.146 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.146 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.146 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.406 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.406 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.406 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.406 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.406 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.666 { 00:14:55.666 "auth": { 00:14:55.666 "dhgroup": "ffdhe8192", 00:14:55.666 "digest": "sha384", 00:14:55.666 "state": "completed" 00:14:55.666 }, 00:14:55.666 
"cntlid": 93, 00:14:55.666 "listen_address": { 00:14:55.666 "adrfam": "IPv4", 00:14:55.666 "traddr": "10.0.0.2", 00:14:55.666 "trsvcid": "4420", 00:14:55.666 "trtype": "TCP" 00:14:55.666 }, 00:14:55.666 "peer_address": { 00:14:55.666 "adrfam": "IPv4", 00:14:55.666 "traddr": "10.0.0.1", 00:14:55.666 "trsvcid": "39422", 00:14:55.666 "trtype": "TCP" 00:14:55.666 }, 00:14:55.666 "qid": 0, 00:14:55.666 "state": "enabled", 00:14:55.666 "thread": "nvmf_tgt_poll_group_000" 00:14:55.666 } 00:14:55.666 ]' 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.666 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.940 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:56.559 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.819 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.392 00:14:57.392 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.392 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.392 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.651 { 00:14:57.651 "auth": { 00:14:57.651 "dhgroup": "ffdhe8192", 00:14:57.651 "digest": "sha384", 00:14:57.651 "state": "completed" 00:14:57.651 }, 00:14:57.651 "cntlid": 95, 00:14:57.651 "listen_address": { 00:14:57.651 "adrfam": "IPv4", 00:14:57.651 "traddr": "10.0.0.2", 00:14:57.651 "trsvcid": "4420", 00:14:57.651 "trtype": "TCP" 00:14:57.651 }, 00:14:57.651 "peer_address": { 00:14:57.651 "adrfam": "IPv4", 00:14:57.651 "traddr": "10.0.0.1", 00:14:57.651 "trsvcid": "53204", 00:14:57.651 "trtype": "TCP" 00:14:57.651 }, 00:14:57.651 "qid": 0, 00:14:57.651 "state": "enabled", 00:14:57.651 "thread": "nvmf_tgt_poll_group_000" 00:14:57.651 } 00:14:57.651 ]' 00:14:57.651 
07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.651 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.909 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.909 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.910 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.910 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:58.845 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 
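Each round above finishes the same way: the RPC-attached controller is detached, the path is exercised once more from the Linux kernel initiator with the DH-HMAC-CHAP secrets passed directly to nvme-cli, and the host is then removed from the subsystem. A minimal sketch of that leg, with the DHHC-1 secrets elided (the actual strings are the ones printed in the trace; the controller secret is omitted for key3, which has no ckey, and the target RPC is again assumed to be on the default socket):

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b
  host_secret='DHHC-1:...'   # host secret as printed in the trace
  ctrl_secret='DHHC-1:...'   # controller secret; drop --dhchap-ctrl-secret when unused

  # Kernel initiator: connect, authenticating with the host secret and,
  # when present, the controller secret for bidirectional authentication.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

  # Tear the session down and drop the host from the subsystem again.
  nvme disconnect -n "$subnqn"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:$hostid"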
00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.105 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.364 00:14:59.364 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.364 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.364 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.624 { 00:14:59.624 "auth": { 00:14:59.624 "dhgroup": "null", 00:14:59.624 "digest": "sha512", 00:14:59.624 "state": "completed" 00:14:59.624 }, 00:14:59.624 "cntlid": 97, 00:14:59.624 "listen_address": { 00:14:59.624 "adrfam": "IPv4", 00:14:59.624 "traddr": "10.0.0.2", 00:14:59.624 "trsvcid": "4420", 00:14:59.624 "trtype": "TCP" 00:14:59.624 }, 00:14:59.624 "peer_address": { 00:14:59.624 "adrfam": "IPv4", 00:14:59.624 "traddr": "10.0.0.1", 00:14:59.624 "trsvcid": "53232", 00:14:59.624 "trtype": "TCP" 00:14:59.624 }, 00:14:59.624 "qid": 0, 00:14:59.624 "state": "enabled", 00:14:59.624 "thread": "nvmf_tgt_poll_group_000" 00:14:59.624 } 00:14:59.624 ]' 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.624 07:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.624 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.894 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 
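Between attach and detach, every round also checks the target's view of the new queue pair: nvmf_subsystem_get_qpairs returns the JSON block shown above, and jq extracts the negotiated auth parameters, which must report the configured digest and DH group and the state completed. A sketch of that verification for the sha512/null rounds in this part of the trace (the variable name is illustrative; the jq filters and expected values are taken from the log, and the target RPC is assumed to be on the default socket):

  # Target-side view of the authenticated qpair, as queried in the trace.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The round only passes if the negotiated parameters match what was configured.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]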
00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.828 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.086 00:15:01.345 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.345 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.345 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.602 { 00:15:01.602 "auth": { 00:15:01.602 "dhgroup": "null", 00:15:01.602 "digest": "sha512", 00:15:01.602 "state": "completed" 00:15:01.602 }, 00:15:01.602 "cntlid": 99, 00:15:01.602 "listen_address": { 00:15:01.602 "adrfam": "IPv4", 00:15:01.602 "traddr": "10.0.0.2", 00:15:01.602 "trsvcid": "4420", 00:15:01.602 "trtype": "TCP" 00:15:01.602 }, 00:15:01.602 "peer_address": { 00:15:01.602 "adrfam": "IPv4", 00:15:01.602 "traddr": "10.0.0.1", 00:15:01.602 "trsvcid": "53256", 00:15:01.602 "trtype": "TCP" 00:15:01.602 }, 00:15:01.602 "qid": 0, 00:15:01.602 "state": "enabled", 00:15:01.602 "thread": "nvmf_tgt_poll_group_000" 00:15:01.602 } 00:15:01.602 ]' 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.602 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.861 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:15:02.431 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.691 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.950 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.209 00:15:03.209 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.209 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.209 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.468 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.469 { 00:15:03.469 "auth": { 00:15:03.469 "dhgroup": "null", 00:15:03.469 "digest": "sha512", 00:15:03.469 "state": "completed" 00:15:03.469 }, 00:15:03.469 "cntlid": 101, 00:15:03.469 "listen_address": { 00:15:03.469 "adrfam": "IPv4", 00:15:03.469 "traddr": "10.0.0.2", 00:15:03.469 "trsvcid": "4420", 00:15:03.469 "trtype": "TCP" 00:15:03.469 }, 00:15:03.469 "peer_address": { 00:15:03.469 "adrfam": "IPv4", 00:15:03.469 "traddr": "10.0.0.1", 00:15:03.469 "trsvcid": "53278", 00:15:03.469 "trtype": "TCP" 00:15:03.469 }, 00:15:03.469 "qid": 0, 00:15:03.469 "state": "enabled", 00:15:03.469 "thread": "nvmf_tgt_poll_group_000" 00:15:03.469 } 00:15:03.469 ]' 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.469 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.748 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.685 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.944 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.945 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:15:05.204 00:15:05.204 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.204 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.204 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.463 { 00:15:05.463 "auth": { 00:15:05.463 "dhgroup": "null", 00:15:05.463 "digest": "sha512", 00:15:05.463 "state": "completed" 00:15:05.463 }, 00:15:05.463 "cntlid": 103, 00:15:05.463 "listen_address": { 00:15:05.463 "adrfam": "IPv4", 00:15:05.463 "traddr": "10.0.0.2", 00:15:05.463 "trsvcid": "4420", 00:15:05.463 "trtype": "TCP" 00:15:05.463 }, 00:15:05.463 "peer_address": { 00:15:05.463 "adrfam": "IPv4", 00:15:05.463 "traddr": "10.0.0.1", 00:15:05.463 "trsvcid": "33874", 00:15:05.463 "trtype": "TCP" 00:15:05.463 }, 00:15:05.463 "qid": 0, 00:15:05.463 "state": "enabled", 00:15:05.463 "thread": "nvmf_tgt_poll_group_000" 00:15:05.463 } 00:15:05.463 ]' 00:15:05.463 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.722 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.982 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:06.549 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.808 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.376 00:15:07.376 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.376 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.376 07:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.376 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.376 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.637 { 00:15:07.637 "auth": { 00:15:07.637 "dhgroup": "ffdhe2048", 00:15:07.637 "digest": "sha512", 00:15:07.637 "state": "completed" 00:15:07.637 }, 00:15:07.637 "cntlid": 105, 00:15:07.637 "listen_address": { 00:15:07.637 "adrfam": "IPv4", 00:15:07.637 "traddr": "10.0.0.2", 00:15:07.637 "trsvcid": "4420", 00:15:07.637 "trtype": "TCP" 00:15:07.637 }, 00:15:07.637 "peer_address": { 00:15:07.637 "adrfam": "IPv4", 00:15:07.637 "traddr": "10.0.0.1", 00:15:07.637 "trsvcid": "33908", 00:15:07.637 "trtype": "TCP" 00:15:07.637 }, 00:15:07.637 "qid": 0, 00:15:07.637 "state": "enabled", 00:15:07.637 "thread": "nvmf_tgt_poll_group_000" 00:15:07.637 } 00:15:07.637 ]' 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.637 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.896 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:08.462 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.721 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.980 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.238 07:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.238 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.238 { 00:15:09.238 "auth": { 00:15:09.238 "dhgroup": "ffdhe2048", 00:15:09.238 "digest": "sha512", 00:15:09.238 "state": "completed" 00:15:09.238 }, 00:15:09.238 "cntlid": 107, 00:15:09.238 "listen_address": { 00:15:09.238 "adrfam": "IPv4", 00:15:09.238 "traddr": "10.0.0.2", 00:15:09.239 "trsvcid": "4420", 00:15:09.239 "trtype": "TCP" 00:15:09.239 }, 00:15:09.239 "peer_address": { 00:15:09.239 "adrfam": "IPv4", 00:15:09.239 "traddr": "10.0.0.1", 00:15:09.239 "trsvcid": "33948", 00:15:09.239 "trtype": "TCP" 00:15:09.239 }, 00:15:09.239 "qid": 0, 00:15:09.239 "state": "enabled", 00:15:09.239 "thread": "nvmf_tgt_poll_group_000" 00:15:09.239 } 00:15:09.239 ]' 00:15:09.239 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.498 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.755 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.322 07:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.322 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.888 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.145 00:15:11.145 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.146 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.146 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:11.409 { 00:15:11.409 "auth": { 00:15:11.409 "dhgroup": "ffdhe2048", 00:15:11.409 "digest": "sha512", 00:15:11.409 "state": "completed" 00:15:11.409 }, 00:15:11.409 "cntlid": 109, 00:15:11.409 "listen_address": { 00:15:11.409 "adrfam": "IPv4", 00:15:11.409 "traddr": "10.0.0.2", 00:15:11.409 "trsvcid": "4420", 00:15:11.409 "trtype": "TCP" 00:15:11.409 }, 00:15:11.409 "peer_address": { 00:15:11.409 "adrfam": "IPv4", 00:15:11.409 "traddr": "10.0.0.1", 00:15:11.409 "trsvcid": "33966", 00:15:11.409 "trtype": "TCP" 00:15:11.409 }, 00:15:11.409 "qid": 0, 00:15:11.409 "state": "enabled", 00:15:11.409 "thread": "nvmf_tgt_poll_group_000" 00:15:11.409 } 00:15:11.409 ]' 00:15:11.409 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.409 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.678 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:15:12.611 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:12.611 07:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:12.611 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.612 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.612 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.612 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.612 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.869 00:15:13.127 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.127 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.127 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.386 { 00:15:13.386 "auth": { 00:15:13.386 "dhgroup": "ffdhe2048", 00:15:13.386 "digest": "sha512", 00:15:13.386 "state": "completed" 00:15:13.386 }, 00:15:13.386 "cntlid": 111, 00:15:13.386 "listen_address": { 00:15:13.386 "adrfam": "IPv4", 00:15:13.386 "traddr": "10.0.0.2", 00:15:13.386 "trsvcid": "4420", 00:15:13.386 "trtype": "TCP" 00:15:13.386 }, 00:15:13.386 "peer_address": { 00:15:13.386 "adrfam": "IPv4", 00:15:13.386 "traddr": "10.0.0.1", 00:15:13.386 
"trsvcid": "34002", 00:15:13.386 "trtype": "TCP" 00:15:13.386 }, 00:15:13.386 "qid": 0, 00:15:13.386 "state": "enabled", 00:15:13.386 "thread": "nvmf_tgt_poll_group_000" 00:15:13.386 } 00:15:13.386 ]' 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.386 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.386 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.386 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.386 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.386 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.386 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.644 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:14.580 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:14.580 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.581 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.581 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.581 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.581 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.581 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.581 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.172 00:15:15.172 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.172 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.172 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.430 { 00:15:15.430 "auth": { 00:15:15.430 "dhgroup": "ffdhe3072", 00:15:15.430 "digest": "sha512", 00:15:15.430 "state": "completed" 00:15:15.430 }, 00:15:15.430 "cntlid": 113, 00:15:15.430 "listen_address": { 00:15:15.430 "adrfam": "IPv4", 00:15:15.430 "traddr": "10.0.0.2", 00:15:15.430 "trsvcid": "4420", 00:15:15.430 "trtype": "TCP" 00:15:15.430 }, 00:15:15.430 "peer_address": { 00:15:15.430 "adrfam": "IPv4", 00:15:15.430 "traddr": "10.0.0.1", 00:15:15.430 "trsvcid": "34016", 00:15:15.430 "trtype": "TCP" 00:15:15.430 }, 00:15:15.430 "qid": 0, 00:15:15.430 "state": "enabled", 00:15:15.430 "thread": "nvmf_tgt_poll_group_000" 00:15:15.430 } 00:15:15.430 ]' 00:15:15.430 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.430 07:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.430 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.430 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.430 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.430 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.430 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.430 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.689 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:16.253 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.253 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:16.253 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.253 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.254 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.254 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.254 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:16.254 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.512 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.771 00:15:17.030 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.030 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.030 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.289 { 00:15:17.289 "auth": { 00:15:17.289 "dhgroup": "ffdhe3072", 00:15:17.289 "digest": "sha512", 00:15:17.289 "state": "completed" 00:15:17.289 }, 00:15:17.289 "cntlid": 115, 00:15:17.289 "listen_address": { 00:15:17.289 "adrfam": "IPv4", 00:15:17.289 "traddr": "10.0.0.2", 00:15:17.289 "trsvcid": "4420", 00:15:17.289 "trtype": "TCP" 00:15:17.289 }, 00:15:17.289 "peer_address": { 00:15:17.289 "adrfam": "IPv4", 00:15:17.289 "traddr": "10.0.0.1", 00:15:17.289 "trsvcid": "55464", 00:15:17.289 "trtype": "TCP" 00:15:17.289 }, 00:15:17.289 "qid": 0, 00:15:17.289 "state": "enabled", 00:15:17.289 "thread": "nvmf_tgt_poll_group_000" 00:15:17.289 } 00:15:17.289 ]' 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.289 07:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.289 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.549 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:18.115 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.374 07:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.374 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.636 00:15:18.905 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.905 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.905 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.166 { 00:15:19.166 "auth": { 00:15:19.166 "dhgroup": "ffdhe3072", 00:15:19.166 "digest": "sha512", 00:15:19.166 "state": "completed" 00:15:19.166 }, 00:15:19.166 "cntlid": 117, 00:15:19.166 "listen_address": { 00:15:19.166 "adrfam": "IPv4", 00:15:19.166 "traddr": "10.0.0.2", 00:15:19.166 "trsvcid": "4420", 00:15:19.166 "trtype": "TCP" 00:15:19.166 }, 00:15:19.166 "peer_address": { 00:15:19.166 "adrfam": "IPv4", 00:15:19.166 "traddr": "10.0.0.1", 00:15:19.166 "trsvcid": "55496", 00:15:19.166 "trtype": "TCP" 00:15:19.166 }, 00:15:19.166 "qid": 0, 00:15:19.166 "state": "enabled", 00:15:19.166 "thread": "nvmf_tgt_poll_group_000" 00:15:19.166 } 00:15:19.166 ]' 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.166 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.426 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.994 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.252 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.511 00:15:20.772 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.772 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.772 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.032 { 00:15:21.032 "auth": { 00:15:21.032 "dhgroup": "ffdhe3072", 00:15:21.032 "digest": "sha512", 00:15:21.032 "state": "completed" 00:15:21.032 }, 00:15:21.032 "cntlid": 119, 00:15:21.032 "listen_address": { 00:15:21.032 "adrfam": "IPv4", 00:15:21.032 "traddr": "10.0.0.2", 00:15:21.032 "trsvcid": "4420", 00:15:21.032 "trtype": "TCP" 00:15:21.032 }, 00:15:21.032 "peer_address": { 00:15:21.032 "adrfam": "IPv4", 00:15:21.032 "traddr": "10.0.0.1", 00:15:21.032 "trsvcid": "55520", 00:15:21.032 "trtype": "TCP" 00:15:21.032 }, 00:15:21.032 "qid": 0, 00:15:21.032 "state": "enabled", 00:15:21.032 "thread": "nvmf_tgt_poll_group_000" 00:15:21.032 } 00:15:21.032 ]' 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.032 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.291 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret 
DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:21.883 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.158 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
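The trace above and below walks the same DH-HMAC-CHAP authentication cycle once per digest/dhgroup/key combination (sha512 with ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 and keys 0-3). Stripped of timestamps, the cycle it exercises is roughly the following sketch; the host NQN, host ID, key names (key0/ckey0, assumed to have been loaded earlier in the test) and the DHHC-1 secrets are placeholders here, not values to copy verbatim:

  # host side: restrict the initiator to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # target side: allow the host on the subsystem with the matching key pair
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach, then verify the controller name and the qpair auth state
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                                                        # expect "completed"
  # tear down, then repeat the same authentication through the kernel initiator
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <host-nqn> --hostid <host-id> --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>
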
00:15:22.727 00:15:22.727 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.728 { 00:15:22.728 "auth": { 00:15:22.728 "dhgroup": "ffdhe4096", 00:15:22.728 "digest": "sha512", 00:15:22.728 "state": "completed" 00:15:22.728 }, 00:15:22.728 "cntlid": 121, 00:15:22.728 "listen_address": { 00:15:22.728 "adrfam": "IPv4", 00:15:22.728 "traddr": "10.0.0.2", 00:15:22.728 "trsvcid": "4420", 00:15:22.728 "trtype": "TCP" 00:15:22.728 }, 00:15:22.728 "peer_address": { 00:15:22.728 "adrfam": "IPv4", 00:15:22.728 "traddr": "10.0.0.1", 00:15:22.728 "trsvcid": "55550", 00:15:22.728 "trtype": "TCP" 00:15:22.728 }, 00:15:22.728 "qid": 0, 00:15:22.728 "state": "enabled", 00:15:22.728 "thread": "nvmf_tgt_poll_group_000" 00:15:22.728 } 00:15:22.728 ]' 00:15:22.728 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.987 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.246 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:23.813 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.813 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.813 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:23.813 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.813 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.071 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.071 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.071 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.072 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.330 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.330 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.330 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.588 00:15:24.588 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.588 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.588 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.847 { 00:15:24.847 "auth": { 00:15:24.847 "dhgroup": "ffdhe4096", 00:15:24.847 "digest": "sha512", 00:15:24.847 "state": "completed" 00:15:24.847 }, 00:15:24.847 "cntlid": 123, 00:15:24.847 "listen_address": { 00:15:24.847 "adrfam": "IPv4", 00:15:24.847 "traddr": "10.0.0.2", 00:15:24.847 "trsvcid": "4420", 00:15:24.847 "trtype": "TCP" 00:15:24.847 }, 00:15:24.847 "peer_address": { 00:15:24.847 "adrfam": "IPv4", 00:15:24.847 "traddr": "10.0.0.1", 00:15:24.847 "trsvcid": "55570", 00:15:24.847 "trtype": "TCP" 00:15:24.847 }, 00:15:24.847 "qid": 0, 00:15:24.847 "state": "enabled", 00:15:24.847 "thread": "nvmf_tgt_poll_group_000" 00:15:24.847 } 00:15:24.847 ]' 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.847 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.106 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.106 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.106 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.364 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:25.952 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.211 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.470 00:15:26.729 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.729 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.729 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.987 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.988 { 00:15:26.988 "auth": { 00:15:26.988 "dhgroup": "ffdhe4096", 00:15:26.988 "digest": "sha512", 00:15:26.988 "state": "completed" 00:15:26.988 }, 00:15:26.988 "cntlid": 125, 00:15:26.988 "listen_address": { 00:15:26.988 "adrfam": "IPv4", 00:15:26.988 "traddr": "10.0.0.2", 00:15:26.988 "trsvcid": "4420", 00:15:26.988 "trtype": "TCP" 00:15:26.988 }, 00:15:26.988 "peer_address": { 00:15:26.988 "adrfam": "IPv4", 00:15:26.988 "traddr": "10.0.0.1", 00:15:26.988 "trsvcid": "33436", 00:15:26.988 "trtype": "TCP" 00:15:26.988 }, 00:15:26.988 "qid": 0, 00:15:26.988 "state": "enabled", 00:15:26.988 "thread": "nvmf_tgt_poll_group_000" 00:15:26.988 } 00:15:26.988 ]' 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.988 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.246 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.181 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.440 00:15:28.440 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.440 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.440 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.699 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.699 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.699 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.699 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.958 { 00:15:28.958 "auth": { 00:15:28.958 "dhgroup": "ffdhe4096", 00:15:28.958 "digest": "sha512", 00:15:28.958 "state": 
"completed" 00:15:28.958 }, 00:15:28.958 "cntlid": 127, 00:15:28.958 "listen_address": { 00:15:28.958 "adrfam": "IPv4", 00:15:28.958 "traddr": "10.0.0.2", 00:15:28.958 "trsvcid": "4420", 00:15:28.958 "trtype": "TCP" 00:15:28.958 }, 00:15:28.958 "peer_address": { 00:15:28.958 "adrfam": "IPv4", 00:15:28.958 "traddr": "10.0.0.1", 00:15:28.958 "trsvcid": "33468", 00:15:28.958 "trtype": "TCP" 00:15:28.958 }, 00:15:28.958 "qid": 0, 00:15:28.958 "state": "enabled", 00:15:28.958 "thread": "nvmf_tgt_poll_group_000" 00:15:28.958 } 00:15:28.958 ]' 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.958 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.216 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:29.783 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe6144 0 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.042 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.637 00:15:30.637 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.637 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.637 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.895 { 00:15:30.895 "auth": { 00:15:30.895 "dhgroup": "ffdhe6144", 00:15:30.895 "digest": "sha512", 00:15:30.895 "state": "completed" 00:15:30.895 }, 00:15:30.895 "cntlid": 129, 00:15:30.895 "listen_address": { 00:15:30.895 "adrfam": "IPv4", 00:15:30.895 "traddr": "10.0.0.2", 00:15:30.895 "trsvcid": "4420", 00:15:30.895 "trtype": "TCP" 00:15:30.895 }, 00:15:30.895 "peer_address": { 00:15:30.895 "adrfam": "IPv4", 00:15:30.895 "traddr": "10.0.0.1", 00:15:30.895 "trsvcid": "33506", 
00:15:30.895 "trtype": "TCP" 00:15:30.895 }, 00:15:30.895 "qid": 0, 00:15:30.895 "state": "enabled", 00:15:30.895 "thread": "nvmf_tgt_poll_group_000" 00:15:30.895 } 00:15:30.895 ]' 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.895 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.896 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.896 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.896 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.896 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.466 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:32.036 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:32.296 07:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.296 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.555 00:15:32.555 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.555 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.555 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.820 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.820 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.820 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.820 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.820 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.820 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.820 { 00:15:32.820 "auth": { 00:15:32.820 "dhgroup": "ffdhe6144", 00:15:32.820 "digest": "sha512", 00:15:32.820 "state": "completed" 00:15:32.821 }, 00:15:32.821 "cntlid": 131, 00:15:32.821 "listen_address": { 00:15:32.821 "adrfam": "IPv4", 00:15:32.821 "traddr": "10.0.0.2", 00:15:32.821 "trsvcid": "4420", 00:15:32.821 "trtype": "TCP" 00:15:32.821 }, 00:15:32.821 "peer_address": { 00:15:32.821 "adrfam": "IPv4", 00:15:32.821 "traddr": "10.0.0.1", 00:15:32.821 "trsvcid": "33530", 00:15:32.821 "trtype": "TCP" 00:15:32.821 }, 00:15:32.821 "qid": 0, 00:15:32.821 "state": "enabled", 00:15:32.821 "thread": "nvmf_tgt_poll_group_000" 00:15:32.821 } 00:15:32.821 ]' 00:15:32.821 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.821 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.821 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.087 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.087 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.087 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.087 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.087 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.346 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:33.914 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.185 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.444 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.704 { 00:15:34.704 "auth": { 00:15:34.704 "dhgroup": "ffdhe6144", 00:15:34.704 "digest": "sha512", 00:15:34.704 "state": "completed" 00:15:34.704 }, 00:15:34.704 "cntlid": 133, 00:15:34.704 "listen_address": { 00:15:34.704 "adrfam": "IPv4", 00:15:34.704 "traddr": "10.0.0.2", 00:15:34.704 "trsvcid": "4420", 00:15:34.704 "trtype": "TCP" 00:15:34.704 }, 00:15:34.704 "peer_address": { 00:15:34.704 "adrfam": "IPv4", 00:15:34.704 "traddr": "10.0.0.1", 00:15:34.704 "trsvcid": "33558", 00:15:34.704 "trtype": "TCP" 00:15:34.704 }, 00:15:34.704 "qid": 0, 00:15:34.704 "state": "enabled", 00:15:34.704 "thread": "nvmf_tgt_poll_group_000" 00:15:34.704 } 00:15:34.704 ]' 00:15:34.704 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.963 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.963 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.963 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.963 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.963 07:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.963 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.963 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.222 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:35.791 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:36.050 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:36.050 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.050 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.050 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:36.050 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:36.050 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.051 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:36.051 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.051 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.051 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.051 07:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.051 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.310 00:15:36.310 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.310 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.310 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.879 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.879 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.879 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.879 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.879 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.879 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.879 { 00:15:36.879 "auth": { 00:15:36.879 "dhgroup": "ffdhe6144", 00:15:36.879 "digest": "sha512", 00:15:36.879 "state": "completed" 00:15:36.879 }, 00:15:36.879 "cntlid": 135, 00:15:36.879 "listen_address": { 00:15:36.879 "adrfam": "IPv4", 00:15:36.879 "traddr": "10.0.0.2", 00:15:36.879 "trsvcid": "4420", 00:15:36.879 "trtype": "TCP" 00:15:36.879 }, 00:15:36.879 "peer_address": { 00:15:36.879 "adrfam": "IPv4", 00:15:36.879 "traddr": "10.0.0.1", 00:15:36.879 "trsvcid": "46386", 00:15:36.879 "trtype": "TCP" 00:15:36.879 }, 00:15:36.879 "qid": 0, 00:15:36.879 "state": "enabled", 00:15:36.879 "thread": "nvmf_tgt_poll_group_000" 00:15:36.880 } 00:15:36.880 ]' 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.880 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.139 07:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:37.709 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.968 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.535 00:15:38.794 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.794 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.794 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.055 { 00:15:39.055 "auth": { 00:15:39.055 "dhgroup": "ffdhe8192", 00:15:39.055 "digest": "sha512", 00:15:39.055 "state": "completed" 00:15:39.055 }, 00:15:39.055 "cntlid": 137, 00:15:39.055 "listen_address": { 00:15:39.055 "adrfam": "IPv4", 00:15:39.055 "traddr": "10.0.0.2", 00:15:39.055 "trsvcid": "4420", 00:15:39.055 "trtype": "TCP" 00:15:39.055 }, 00:15:39.055 "peer_address": { 00:15:39.055 "adrfam": "IPv4", 00:15:39.055 "traddr": "10.0.0.1", 00:15:39.055 "trsvcid": "46412", 00:15:39.055 "trtype": "TCP" 00:15:39.055 }, 00:15:39.055 "qid": 0, 00:15:39.055 "state": "enabled", 00:15:39.055 "thread": "nvmf_tgt_poll_group_000" 00:15:39.055 } 00:15:39.055 ]' 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.055 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.323 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: 
--dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:39.918 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.177 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.754 00:15:40.754 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:15:40.754 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.754 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.017 { 00:15:41.017 "auth": { 00:15:41.017 "dhgroup": "ffdhe8192", 00:15:41.017 "digest": "sha512", 00:15:41.017 "state": "completed" 00:15:41.017 }, 00:15:41.017 "cntlid": 139, 00:15:41.017 "listen_address": { 00:15:41.017 "adrfam": "IPv4", 00:15:41.017 "traddr": "10.0.0.2", 00:15:41.017 "trsvcid": "4420", 00:15:41.017 "trtype": "TCP" 00:15:41.017 }, 00:15:41.017 "peer_address": { 00:15:41.017 "adrfam": "IPv4", 00:15:41.017 "traddr": "10.0.0.1", 00:15:41.017 "trsvcid": "46452", 00:15:41.017 "trtype": "TCP" 00:15:41.017 }, 00:15:41.017 "qid": 0, 00:15:41.017 "state": "enabled", 00:15:41.017 "thread": "nvmf_tgt_poll_group_000" 00:15:41.017 } 00:15:41.017 ]' 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.017 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.274 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.274 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.274 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.274 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.274 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.533 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:01:Mjk5YThmZWZlYzZlOTdmNTM0NzQzOTA4MDI1ZGFlZWMQFETY: --dhchap-ctrl-secret DHHC-1:02:NTNlOGZkNjc5ODQ2MDcyY2YzMzY4Y2ZmYTNmODFhZDg5MzM5MzY4OWM3NGYzMjY5dy7MRw==: 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:42.103 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.362 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.362 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.362 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.362 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.931 00:15:42.931 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.931 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.931 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.191 { 00:15:43.191 "auth": { 00:15:43.191 "dhgroup": "ffdhe8192", 00:15:43.191 "digest": "sha512", 00:15:43.191 "state": "completed" 00:15:43.191 }, 00:15:43.191 "cntlid": 141, 00:15:43.191 "listen_address": { 00:15:43.191 "adrfam": "IPv4", 00:15:43.191 "traddr": "10.0.0.2", 00:15:43.191 "trsvcid": "4420", 00:15:43.191 "trtype": "TCP" 00:15:43.191 }, 00:15:43.191 "peer_address": { 00:15:43.191 "adrfam": "IPv4", 00:15:43.191 "traddr": "10.0.0.1", 00:15:43.191 "trsvcid": "46484", 00:15:43.191 "trtype": "TCP" 00:15:43.191 }, 00:15:43.191 "qid": 0, 00:15:43.191 "state": "enabled", 00:15:43.191 "thread": "nvmf_tgt_poll_group_000" 00:15:43.191 } 00:15:43.191 ]' 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.191 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.451 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.451 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.451 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.451 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.451 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.710 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:02:MTMxYmNmYjZmMWRlZDNmNzUzZGNjZWE4MzQwYjllNTkwNzYxZjM2OTJiOTZmZDY5o6kgOQ==: --dhchap-ctrl-secret DHHC-1:01:NGRiMDI0ZjEwOTM2NGE2MDkzNDQ4ZTVmN2Y2MmIwZWHHbEax: 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.277 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.536 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.103 00:15:45.103 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.103 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.103 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.362 { 00:15:45.362 "auth": { 00:15:45.362 "dhgroup": "ffdhe8192", 00:15:45.362 "digest": "sha512", 00:15:45.362 "state": "completed" 00:15:45.362 }, 00:15:45.362 "cntlid": 143, 00:15:45.362 "listen_address": { 00:15:45.362 "adrfam": "IPv4", 00:15:45.362 "traddr": "10.0.0.2", 00:15:45.362 "trsvcid": "4420", 00:15:45.362 "trtype": "TCP" 00:15:45.362 }, 00:15:45.362 "peer_address": { 00:15:45.362 "adrfam": "IPv4", 00:15:45.362 "traddr": "10.0.0.1", 00:15:45.362 "trsvcid": "46502", 00:15:45.362 "trtype": "TCP" 00:15:45.362 }, 00:15:45.362 "qid": 0, 00:15:45.362 "state": "enabled", 00:15:45.362 "thread": "nvmf_tgt_poll_group_000" 00:15:45.362 } 00:15:45.362 ]' 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.362 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.362 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.362 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.362 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.362 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.362 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.621 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:46.189 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:46.190 07:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:46.190 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.448 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.018 00:15:47.018 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.018 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.018 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.279 { 00:15:47.279 "auth": { 00:15:47.279 "dhgroup": "ffdhe8192", 00:15:47.279 "digest": "sha512", 00:15:47.279 "state": "completed" 00:15:47.279 }, 00:15:47.279 "cntlid": 145, 00:15:47.279 "listen_address": { 00:15:47.279 "adrfam": "IPv4", 00:15:47.279 "traddr": "10.0.0.2", 00:15:47.279 "trsvcid": "4420", 00:15:47.279 "trtype": "TCP" 00:15:47.279 }, 00:15:47.279 "peer_address": { 00:15:47.279 "adrfam": "IPv4", 00:15:47.279 "traddr": "10.0.0.1", 00:15:47.279 "trsvcid": "60062", 00:15:47.279 "trtype": "TCP" 00:15:47.279 }, 00:15:47.279 "qid": 0, 00:15:47.279 "state": "enabled", 00:15:47.279 "thread": "nvmf_tgt_poll_group_000" 00:15:47.279 } 00:15:47.279 ]' 00:15:47.279 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.280 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.280 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.280 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.280 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.538 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.538 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.538 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.538 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:00:OTkyZmIwMTJhNTFlMWQ1MzEwNGQ2ZGZkZjU5OGFhNGJlMzM3NDc5ZmU0ZjMwZmQ3SDDL5w==: --dhchap-ctrl-secret DHHC-1:03:ZjcyNjI2MTAzYzdhMDNlYTk3ZDU3MGM5ODRiOWNiOWQwMjZjMDVhMWJhNjZmZDNhYWQ4NzY4YjI2MzhkYTYyOcp5Q4k=: 00:15:48.494 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.495 07:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:48.495 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:48.755 2024/07/25 07:30:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:48.755 request: 00:15:48.755 { 00:15:48.755 "method": "bdev_nvme_attach_controller", 00:15:48.755 "params": { 00:15:48.755 "name": "nvme0", 00:15:48.755 "trtype": "tcp", 00:15:48.755 "traddr": "10.0.0.2", 00:15:48.755 "adrfam": "ipv4", 00:15:48.755 "trsvcid": "4420", 00:15:48.755 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:48.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b", 00:15:48.756 "prchk_reftag": false, 00:15:48.756 "prchk_guard": false, 00:15:48.756 "hdgst": false, 00:15:48.756 "ddgst": false, 00:15:48.756 "dhchap_key": "key2" 00:15:48.756 } 00:15:48.756 } 00:15:48.756 Got JSON-RPC error response 00:15:48.756 GoRPCClient: error on JSON-RPC call 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.756 07:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:48.756 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:49.325 2024/07/25 07:30:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 
dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:49.325 request: 00:15:49.325 { 00:15:49.325 "method": "bdev_nvme_attach_controller", 00:15:49.325 "params": { 00:15:49.325 "name": "nvme0", 00:15:49.326 "trtype": "tcp", 00:15:49.326 "traddr": "10.0.0.2", 00:15:49.326 "adrfam": "ipv4", 00:15:49.326 "trsvcid": "4420", 00:15:49.326 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:49.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b", 00:15:49.326 "prchk_reftag": false, 00:15:49.326 "prchk_guard": false, 00:15:49.326 "hdgst": false, 00:15:49.326 "ddgst": false, 00:15:49.326 "dhchap_key": "key1", 00:15:49.326 "dhchap_ctrlr_key": "ckey2" 00:15:49.326 } 00:15:49.326 } 00:15:49.326 Got JSON-RPC error response 00:15:49.326 GoRPCClient: error on JSON-RPC call 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key1 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.326 07:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.326 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.895 2024/07/25 07:30:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:49.895 request: 00:15:49.895 { 00:15:49.895 "method": "bdev_nvme_attach_controller", 00:15:49.895 "params": { 00:15:49.895 "name": "nvme0", 00:15:49.895 "trtype": "tcp", 00:15:49.895 "traddr": "10.0.0.2", 00:15:49.895 "adrfam": "ipv4", 00:15:49.895 "trsvcid": "4420", 00:15:49.895 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:49.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b", 00:15:49.895 "prchk_reftag": false, 00:15:49.895 "prchk_guard": false, 00:15:49.895 "hdgst": false, 00:15:49.895 "ddgst": false, 00:15:49.895 "dhchap_key": "key1", 00:15:49.895 "dhchap_ctrlr_key": "ckey1" 00:15:49.895 } 00:15:49.895 } 00:15:49.895 Got JSON-RPC error response 00:15:49.895 GoRPCClient: error on JSON-RPC call 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@138 -- # killprocess 78171 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78171 ']' 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78171 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.895 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78171 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.154 killing process with pid 78171 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78171' 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78171 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78171 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82972 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82972 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82972 ']' 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
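[Note] At this point the test has killed the first target process (pid 78171) and is bringing up a fresh nvmf_tgt with --wait-for-rpc and the nvmf_auth log flag enabled, so the negative authentication cases that follow are captured with auth debug output. A minimal sketch of that restart is shown below; the paths, netns name, and flags are taken from the log above, and the readiness loop is only a simplified stand-in for the autotest waitforlisten helper, not its actual implementation.

    # Restart the target inside the test network namespace with nvmf_auth
    # debug logging enabled, as nvmfappstart does in the log above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: poll the default RPC socket
    # until the application answers, then continue with configuration RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done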
00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.154 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.090 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.090 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:51.090 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.090 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.090 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82972 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82972 ']' 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.349 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.349 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.349 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:51.349 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:51.349 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.349 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.607 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.172 00:15:52.172 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.172 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.172 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
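[Note] The records around here are another pass of the connect_authenticate helper (sha512 / ffdhe8192 / key3) against the freshly restarted target: the host NQN is allowed on the subsystem with a DH-HMAC-CHAP key, the host-side bdev layer attaches a controller with the matching key, and the resulting qpair is read back to confirm the negotiated digest, dhgroup, and a "completed" auth state. The commands below are a condensed sketch of those steps as they appear in the log, not the helper itself; key3 refers to a key file registered earlier in the test, and the NQNs and host UUID are the ones used throughout this run.

    # Target side: allow the host NQN on the subsystem with DH-HMAC-CHAP key3.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
        --dhchap-key key3

    # Host side: attach a controller through the host RPC server with the same key.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

    # Verify on the target: the qpair should report sha512 / ffdhe8192 and
    # an auth state of "completed", as seen in the qpairs JSON below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs \
        nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'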
00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.430 { 00:15:52.430 "auth": { 00:15:52.430 "dhgroup": "ffdhe8192", 00:15:52.430 "digest": "sha512", 00:15:52.430 "state": "completed" 00:15:52.430 }, 00:15:52.430 "cntlid": 1, 00:15:52.430 "listen_address": { 00:15:52.430 "adrfam": "IPv4", 00:15:52.430 "traddr": "10.0.0.2", 00:15:52.430 "trsvcid": "4420", 00:15:52.430 "trtype": "TCP" 00:15:52.430 }, 00:15:52.430 "peer_address": { 00:15:52.430 "adrfam": "IPv4", 00:15:52.430 "traddr": "10.0.0.1", 00:15:52.430 "trsvcid": "60126", 00:15:52.430 "trtype": "TCP" 00:15:52.430 }, 00:15:52.430 "qid": 0, 00:15:52.430 "state": "enabled", 00:15:52.430 "thread": "nvmf_tgt_poll_group_000" 00:15:52.430 } 00:15:52.430 ]' 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.430 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.689 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-secret DHHC-1:03:M2JlOTcwNjJmMGFhNmJkYzc5NmZkNzdhZjYzNmJmM2E2ODkwZjJkNDA3NWU5MjIzYzM4MGJmN2ViNjAwMmYwNI0nNrE=: 00:15:53.255 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.255 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:53.255 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.255 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.514 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --dhchap-key key3 00:15:53.514 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.514 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.514 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.773 2024/07/25 07:30:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:53.773 request: 00:15:53.773 { 00:15:53.773 "method": "bdev_nvme_attach_controller", 00:15:53.773 "params": { 00:15:53.773 "name": "nvme0", 00:15:53.773 "trtype": "tcp", 00:15:53.773 "traddr": "10.0.0.2", 00:15:53.773 "adrfam": "ipv4", 00:15:53.773 "trsvcid": "4420", 00:15:53.773 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:53.773 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b", 00:15:53.773 "prchk_reftag": false, 00:15:53.773 "prchk_guard": false, 00:15:53.773 "hdgst": false, 00:15:53.773 "ddgst": false, 00:15:53.773 "dhchap_key": "key3" 00:15:53.773 } 00:15:53.773 } 00:15:53.773 Got JSON-RPC error response 00:15:53.773 GoRPCClient: error on JSON-RPC call 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:53.773 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.032 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.291 2024/07/25 07:30:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:54.291 request: 00:15:54.291 { 00:15:54.291 "method": "bdev_nvme_attach_controller", 00:15:54.291 "params": { 00:15:54.291 "name": "nvme0", 00:15:54.291 "trtype": "tcp", 00:15:54.291 "traddr": "10.0.0.2", 00:15:54.291 "adrfam": "ipv4", 00:15:54.291 "trsvcid": "4420", 00:15:54.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:54.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b", 00:15:54.291 "prchk_reftag": false, 00:15:54.291 "prchk_guard": false, 00:15:54.291 "hdgst": false, 00:15:54.291 "ddgst": false, 00:15:54.291 "dhchap_key": "key3" 00:15:54.291 } 00:15:54.291 } 00:15:54.291 Got JSON-RPC error response 00:15:54.291 GoRPCClient: error on JSON-RPC call 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.558 07:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:54.558 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:54.831 2024/07/25 07:30:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:54.831 request: 00:15:54.831 { 00:15:54.831 "method": "bdev_nvme_attach_controller", 00:15:54.831 "params": { 00:15:54.831 "name": "nvme0", 00:15:54.831 "trtype": "tcp", 00:15:54.831 "traddr": "10.0.0.2", 00:15:54.831 "adrfam": "ipv4", 00:15:54.831 "trsvcid": "4420", 00:15:54.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:54.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b", 00:15:54.831 "prchk_reftag": false, 00:15:54.831 "prchk_guard": false, 00:15:54.831 "hdgst": false, 00:15:54.831 "ddgst": false, 00:15:54.831 "dhchap_key": "key0", 00:15:54.831 "dhchap_ctrlr_key": "key1" 00:15:54.831 } 00:15:54.831 } 00:15:54.831 Got JSON-RPC error response 00:15:54.831 GoRPCClient: error on JSON-RPC call 00:15:54.831 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:54.831 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 
-- # (( es > 128 )) 00:15:54.831 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:54.831 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:54.831 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:54.831 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:55.090 00:15:55.090 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:55.090 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:55.090 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.348 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.348 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.348 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.606 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:55.606 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:55.606 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78215 00:15:55.606 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78215 ']' 00:15:55.606 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78215 00:15:55.606 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78215 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:55.607 killing process with pid 78215 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78215' 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78215 00:15:55.607 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78215 00:15:55.865 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:55.865 07:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.865 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.123 rmmod nvme_tcp 00:15:56.123 rmmod nvme_fabrics 00:15:56.123 rmmod nvme_keyring 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82972 ']' 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82972 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82972 ']' 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82972 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82972 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82972' 00:15:56.123 killing process with pid 82972 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82972 00:15:56.123 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82972 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.382 07:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xGC /tmp/spdk.key-sha256.aD9 /tmp/spdk.key-sha384.hwt /tmp/spdk.key-sha512.P9f /tmp/spdk.key-sha512.sxZ /tmp/spdk.key-sha384.5TI /tmp/spdk.key-sha256.mtl '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:56.382 00:15:56.382 real 2m42.249s 00:15:56.382 user 6m32.883s 00:15:56.382 sys 0m20.129s 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.382 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.382 ************************************ 00:15:56.382 END TEST nvmf_auth_target 00:15:56.382 ************************************ 00:15:56.382 07:30:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:56.382 07:30:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:56.382 07:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:56.382 07:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.382 07:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.382 ************************************ 00:15:56.382 START TEST nvmf_bdevio_no_huge 00:15:56.382 ************************************ 00:15:56.382 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:56.642 * Looking for test storage... 
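From here the harness moves to the next suite: run_test launches test/nvmf/target/bdevio.sh with --transport=tcp and --no-hugepages, i.e. the standard bdevio suite re-run with hugepages disabled so both the nvmf target and the bdevio application fall back to regular memory (the EAL parameter dumps further down show --no-huge with a 1024 MB budget). Roughly, the standalone equivalent of what the harness is doing here would be the following, assuming an SPDK build tree and root privileges:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
  # which in turn starts the target inside the test namespace, roughly as logged below:
  #   ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78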
00:15:56.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.642 Cannot find device "nvmf_tgt_br" 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.642 Cannot find device "nvmf_tgt_br2" 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.642 Cannot find device "nvmf_tgt_br" 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.642 Cannot find device "nvmf_tgt_br2" 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.642 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.900 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:56.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:56.901 00:15:56.901 --- 10.0.0.2 ping statistics --- 00:15:56.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.901 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:56.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:56.901 00:15:56.901 --- 10.0.0.3 ping statistics --- 00:15:56.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.901 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:56.901 00:15:56.901 --- 10.0.0.1 ping statistics --- 00:15:56.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.901 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83363 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83363 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83363 ']' 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.901 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.901 [2024-07-25 07:30:29.629367] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
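Before that target process comes up, nvmftestinit/nvmf_veth_init builds a small virtual test network: a network namespace nvmf_tgt_ns_spdk holds the target-side veth ends, a bridge nvmf_br ties them to the initiator-side veth, 10.0.0.1/24 is assigned to the initiator interface and 10.0.0.2/10.0.0.3 to the two target interfaces, an iptables rule accepts TCP port 4420, and the three pings above confirm reachability in both directions. A condensed sketch of that setup, using only commands that appear in the log and omitting the second target interface (nvmf_tgt_if2/10.0.0.3):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2          # initiator -> target, as verified above

The target itself is then started inside the namespace with --no-huge -s 1024, which is what the EAL parameter line that follows reflects.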
00:15:56.901 [2024-07-25 07:30:29.629450] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:57.159 [2024-07-25 07:30:29.762695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.159 [2024-07-25 07:30:29.874573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.159 [2024-07-25 07:30:29.874621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.159 [2024-07-25 07:30:29.874628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.159 [2024-07-25 07:30:29.874634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.159 [2024-07-25 07:30:29.874638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.159 [2024-07-25 07:30:29.874817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:57.159 [2024-07-25 07:30:29.874999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:57.159 [2024-07-25 07:30:29.875075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:57.159 [2024-07-25 07:30:29.875087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 [2024-07-25 07:30:30.582411] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 Malloc0 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 [2024-07-25 07:30:30.619786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:58.092 { 00:15:58.092 "params": { 00:15:58.092 "name": "Nvme$subsystem", 00:15:58.092 "trtype": "$TEST_TRANSPORT", 00:15:58.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.092 "adrfam": "ipv4", 00:15:58.092 "trsvcid": "$NVMF_PORT", 00:15:58.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.092 "hdgst": ${hdgst:-false}, 00:15:58.092 "ddgst": ${ddgst:-false} 00:15:58.092 }, 00:15:58.092 "method": "bdev_nvme_attach_controller" 00:15:58.092 } 00:15:58.092 EOF 00:15:58.092 )") 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
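Target provisioning for this suite is a handful of RPCs (TCP transport, a malloc bdev, a subsystem, a namespace, a listener), after which the bdevio application is run against a JSON config generated on the fly by gen_nvmf_target_json, the heredoc above; the resulting Nvme1n1 bdev is the 64 MiB device of 512-byte blocks reported in the I/O targets list below. A sketch of the equivalent manual provisioning, with the same values as this run and rpc.py talking to the target's default socket:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the bdevio app then attaches over TCP using the generated config passed on fd 62:
  ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024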
00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:58.092 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:58.092 "params": { 00:15:58.092 "name": "Nvme1", 00:15:58.092 "trtype": "tcp", 00:15:58.092 "traddr": "10.0.0.2", 00:15:58.092 "adrfam": "ipv4", 00:15:58.092 "trsvcid": "4420", 00:15:58.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.092 "hdgst": false, 00:15:58.092 "ddgst": false 00:15:58.092 }, 00:15:58.092 "method": "bdev_nvme_attach_controller" 00:15:58.092 }' 00:15:58.092 [2024-07-25 07:30:30.676082] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:15:58.092 [2024-07-25 07:30:30.676177] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83421 ] 00:15:58.092 [2024-07-25 07:30:30.806521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:58.350 [2024-07-25 07:30:30.974486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.350 [2024-07-25 07:30:30.974541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.350 [2024-07-25 07:30:30.974545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.607 I/O targets: 00:15:58.607 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:58.607 00:15:58.607 00:15:58.607 CUnit - A unit testing framework for C - Version 2.1-3 00:15:58.607 http://cunit.sourceforge.net/ 00:15:58.607 00:15:58.607 00:15:58.607 Suite: bdevio tests on: Nvme1n1 00:15:58.607 Test: blockdev write read block ...passed 00:15:58.607 Test: blockdev write zeroes read block ...passed 00:15:58.607 Test: blockdev write zeroes read no split ...passed 00:15:58.607 Test: blockdev write zeroes read split ...passed 00:15:58.607 Test: blockdev write zeroes read split partial ...passed 00:15:58.607 Test: blockdev reset ...[2024-07-25 07:30:31.279887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:58.607 [2024-07-25 07:30:31.280025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1551460 (9): Bad file descriptor 00:15:58.607 [2024-07-25 07:30:31.291943] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:58.607 passed 00:15:58.607 Test: blockdev write read 8 blocks ...passed 00:15:58.607 Test: blockdev write read size > 128k ...passed 00:15:58.607 Test: blockdev write read invalid size ...passed 00:15:58.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.607 Test: blockdev write read max offset ...passed 00:15:58.865 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.865 Test: blockdev writev readv 8 blocks ...passed 00:15:58.865 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.865 Test: blockdev writev readv block ...passed 00:15:58.865 Test: blockdev writev readv size > 128k ...passed 00:15:58.865 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.865 Test: blockdev comparev and writev ...[2024-07-25 07:30:31.463756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.463811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.463827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.463834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.464110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.464145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.464159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.464166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.464452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.464477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.464490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.464497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.464755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.464777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.464790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.865 [2024-07-25 07:30:31.464797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:58.865 passed 00:15:58.865 Test: blockdev nvme passthru rw ...passed 00:15:58.865 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:30:31.546519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.865 [2024-07-25 07:30:31.546601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.546746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.865 [2024-07-25 07:30:31.546766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.546870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.865 [2024-07-25 07:30:31.546889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:58.865 [2024-07-25 07:30:31.546983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.865 [2024-07-25 07:30:31.547000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:58.865 passed 00:15:58.865 Test: blockdev nvme admin passthru ...passed 00:15:59.123 Test: blockdev copy ...passed 00:15:59.123 00:15:59.123 Run Summary: Type Total Ran Passed Failed Inactive 00:15:59.123 suites 1 1 n/a 0 0 00:15:59.123 tests 23 23 23 0 0 00:15:59.123 asserts 152 152 152 0 n/a 00:15:59.123 00:15:59.123 Elapsed time = 0.926 seconds 00:15:59.380 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.381 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.381 rmmod nvme_tcp 00:15:59.381 rmmod nvme_fabrics 00:15:59.381 rmmod nvme_keyring 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83363 ']' 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 83363 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83363 ']' 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83363 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83363 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:59.381 killing process with pid 83363 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83363' 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83363 00:15:59.381 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83363 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:59.946 00:15:59.946 real 0m3.434s 00:15:59.946 user 0m12.056s 00:15:59.946 sys 0m1.221s 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:59.946 ************************************ 00:15:59.946 END TEST nvmf_bdevio_no_huge 00:15:59.946 ************************************ 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
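The teardown just traced (kill -0, a ps comm check, kill, wait) and the hand-off into the next suite via run_test follow a fixed harness pattern. The sketch below is an approximation reconstructed from the xtrace above, not the actual autotest_common.sh source; the guards are simplified.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0               # nothing to do if it is already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_3 in the trace above
    [ "$name" = sudo ] && return 1           # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap it so the next suite starts clean
}

The suite switch uses the same convention seen in the trace: run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp, so the exit code of tls.sh is attributed to a named test in the final summary.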
00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.946 ************************************ 00:15:59.946 START TEST nvmf_tls 00:15:59.946 ************************************ 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:59.946 * Looking for test storage... 00:15:59.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.946 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:59.947 Cannot find device 
"nvmf_tgt_br" 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:59.947 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.205 Cannot find device "nvmf_tgt_br2" 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:00.205 Cannot find device "nvmf_tgt_br" 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:00.205 Cannot find device "nvmf_tgt_br2" 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.205 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:00.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:16:00.206 00:16:00.206 --- 10.0.0.2 ping statistics --- 00:16:00.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.206 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:00.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:00.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:00.206 00:16:00.206 --- 10.0.0.3 ping statistics --- 00:16:00.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.206 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:00.206 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:00.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:00.464 00:16:00.464 --- 10.0.0.1 ping statistics --- 00:16:00.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.464 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83610 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83610 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83610 ']' 00:16:00.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.464 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.464 [2024-07-25 07:30:33.024824] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:00.464 [2024-07-25 07:30:33.024909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.464 [2024-07-25 07:30:33.152652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.725 [2024-07-25 07:30:33.271562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.725 [2024-07-25 07:30:33.271621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.725 [2024-07-25 07:30:33.271630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.725 [2024-07-25 07:30:33.271636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.725 [2024-07-25 07:30:33.271640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.725 [2024-07-25 07:30:33.271664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:01.295 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:01.554 true 00:16:01.554 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:01.554 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:01.813 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:01.813 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:01.813 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:02.072 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:02.072 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:02.330 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:02.330 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:02.330 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:02.896 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:02.896 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:02.896 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:02.896 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:02.896 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:02.896 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:03.158 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:03.158 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:03.158 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:03.421 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:03.421 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:03.679 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:03.679 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:03.679 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:03.937 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:03.937 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.WptzWTj3uH 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Ed8lTuHUll 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.WptzWTj3uH 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Ed8lTuHUll 00:16:04.196 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:04.455 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:04.713 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.WptzWTj3uH 00:16:04.713 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WptzWTj3uH 00:16:04.713 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:04.971 [2024-07-25 07:30:37.612535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.971 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:05.230 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:05.488 [2024-07-25 07:30:38.107695] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:05.488 [2024-07-25 07:30:38.107894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.488 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:05.744 malloc0 00:16:05.744 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:06.002 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WptzWTj3uH 00:16:06.261 [2024-07-25 07:30:38.835600] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:06.261 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WptzWTj3uH 00:16:18.468 Initializing NVMe Controllers 00:16:18.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:18.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:18.468 Initialization complete. Launching workers. 00:16:18.468 ======================================================== 00:16:18.468 Latency(us) 00:16:18.468 Device Information : IOPS MiB/s Average min max 00:16:18.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12859.50 50.23 4977.51 1104.91 12978.05 00:16:18.468 ======================================================== 00:16:18.468 Total : 12859.50 50.23 4977.51 1104.91 12978.05 00:16:18.468 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WptzWTj3uH 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WptzWTj3uH' 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83962 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83962 /var/tmp/bdevperf.sock 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83962 ']' 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.468 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.468 [2024-07-25 07:30:49.097728] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:18.468 [2024-07-25 07:30:49.097815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83962 ] 00:16:18.468 [2024-07-25 07:30:49.236343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.468 [2024-07-25 07:30:49.344043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.468 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.468 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:18.468 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WptzWTj3uH 00:16:18.468 [2024-07-25 07:30:50.294448] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.468 [2024-07-25 07:30:50.294557] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:18.468 TLSTESTn1 00:16:18.468 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:18.468 Running I/O for 10 seconds... 00:16:28.486 00:16:28.486 Latency(us) 00:16:28.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.486 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:28.486 Verification LBA range: start 0x0 length 0x2000 00:16:28.486 TLSTESTn1 : 10.01 5615.98 21.94 0.00 0.00 22752.68 4865.12 21406.52 00:16:28.486 =================================================================================================================== 00:16:28.486 Total : 5615.98 21.94 0.00 0.00 22752.68 4865.12 21406.52 00:16:28.486 0 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83962 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83962 ']' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83962 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83962 00:16:28.487 killing process with pid 83962 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83962' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83962 00:16:28.487 
Received shutdown signal, test time was about 10.000000 seconds 00:16:28.487 00:16:28.487 Latency(us) 00:16:28.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.487 =================================================================================================================== 00:16:28.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.487 [2024-07-25 07:31:00.557644] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83962 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ed8lTuHUll 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ed8lTuHUll 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ed8lTuHUll 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ed8lTuHUll' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84114 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84114 /var/tmp/bdevperf.sock 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84114 ']' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.487 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.487 [2024-07-25 07:31:00.801066] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:28.487 [2024-07-25 07:31:00.801159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84114 ] 00:16:28.487 [2024-07-25 07:31:00.940781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.487 [2024-07-25 07:31:01.048640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.055 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.055 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:29.055 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ed8lTuHUll 00:16:29.315 [2024-07-25 07:31:01.879553] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.315 [2024-07-25 07:31:01.879658] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:29.315 [2024-07-25 07:31:01.887329] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:29.315 [2024-07-25 07:31:01.888078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144aca0 (107): Transport endpoint is not connected 00:16:29.315 [2024-07-25 07:31:01.889065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144aca0 (9): Bad file descriptor 00:16:29.315 [2024-07-25 07:31:01.890059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:29.315 [2024-07-25 07:31:01.890080] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:29.315 [2024-07-25 07:31:01.890090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:29.315 2024/07/25 07:31:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Ed8lTuHUll subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:29.315 request: 00:16:29.315 { 00:16:29.315 "method": "bdev_nvme_attach_controller", 00:16:29.315 "params": { 00:16:29.315 "name": "TLSTEST", 00:16:29.315 "trtype": "tcp", 00:16:29.315 "traddr": "10.0.0.2", 00:16:29.315 "adrfam": "ipv4", 00:16:29.315 "trsvcid": "4420", 00:16:29.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.315 "prchk_reftag": false, 00:16:29.315 "prchk_guard": false, 00:16:29.315 "hdgst": false, 00:16:29.315 "ddgst": false, 00:16:29.315 "psk": "/tmp/tmp.Ed8lTuHUll" 00:16:29.315 } 00:16:29.315 } 00:16:29.315 Got JSON-RPC error response 00:16:29.315 GoRPCClient: error on JSON-RPC call 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 84114 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84114 ']' 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84114 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84114 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:29.315 killing process with pid 84114 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84114' 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84114 00:16:29.315 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.315 00:16:29.315 Latency(us) 00:16:29.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.315 =================================================================================================================== 00:16:29.315 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:29.315 [2024-07-25 07:31:01.938308] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:29.315 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84114 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
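The sequence just logged is the expected-failure path: bdevperf tried to attach with the wrong key (/tmp/tmp.Ed8lTuHUll), the TLS-level connection never came up (spdk_sock_recv errno 107 in the log above), the RPC returned -5, and run_bdevperf exited non-zero. The NOT wrapper whose xtrace appears above turns that non-zero exit into a pass. A simplified sketch of that inversion, reconstructed from the trace rather than copied from autotest_common.sh:

# Run the wrapped command and succeed only when it fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # invert: an error from the command is the desired outcome
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ed8lTuHUll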
00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WptzWTj3uH 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WptzWTj3uH 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WptzWTj3uH 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WptzWTj3uH' 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84158 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84158 /var/tmp/bdevperf.sock 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84158 ']' 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.575 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.575 [2024-07-25 07:31:02.185109] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:29.575 [2024-07-25 07:31:02.185238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84158 ] 00:16:29.850 [2024-07-25 07:31:02.313217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.850 [2024-07-25 07:31:02.418380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.420 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.420 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:30.420 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WptzWTj3uH 00:16:30.679 [2024-07-25 07:31:03.279722] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:30.680 [2024-07-25 07:31:03.279810] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:30.680 [2024-07-25 07:31:03.284569] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:30.680 [2024-07-25 07:31:03.284605] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:30.680 [2024-07-25 07:31:03.284650] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:30.680 [2024-07-25 07:31:03.285165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aaca0 (107): Transport endpoint is not connected 00:16:30.680 [2024-07-25 07:31:03.286151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aaca0 (9): Bad file descriptor 00:16:30.680 [2024-07-25 07:31:03.287147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:30.680 [2024-07-25 07:31:03.287169] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:30.680 [2024-07-25 07:31:03.287180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:30.680 2024/07/25 07:31:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.WptzWTj3uH subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:30.680 request: 00:16:30.680 { 00:16:30.680 "method": "bdev_nvme_attach_controller", 00:16:30.680 "params": { 00:16:30.680 "name": "TLSTEST", 00:16:30.680 "trtype": "tcp", 00:16:30.680 "traddr": "10.0.0.2", 00:16:30.680 "adrfam": "ipv4", 00:16:30.680 "trsvcid": "4420", 00:16:30.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.680 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:30.680 "prchk_reftag": false, 00:16:30.680 "prchk_guard": false, 00:16:30.680 "hdgst": false, 00:16:30.680 "ddgst": false, 00:16:30.680 "psk": "/tmp/tmp.WptzWTj3uH" 00:16:30.680 } 00:16:30.680 } 00:16:30.680 Got JSON-RPC error response 00:16:30.680 GoRPCClient: error on JSON-RPC call 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 84158 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84158 ']' 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84158 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84158 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:30.680 killing process with pid 84158 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84158' 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84158 00:16:30.680 Received shutdown signal, test time was about 10.000000 seconds 00:16:30.680 00:16:30.680 Latency(us) 00:16:30.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.680 =================================================================================================================== 00:16:30.680 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:30.680 [2024-07-25 07:31:03.344845] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:30.680 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84158 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
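This second expected failure differs from the previous one: the key file is the valid /tmp/tmp.WptzWTj3uH, but the connection is made as nqn.2016-06.io.spdk:host2, and the target-side error above shows the identity it could not resolve: "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". PSKs are looked up per (host NQN, subsystem NQN) pair, and only host1 was registered earlier in the trace. Purely as an illustration of the registration the test deliberately omits, reusing the same RPC that was traced for host1:

# Hypothetical: register host2 with a PSK on cnode1 so the identity above would resolve.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.WptzWTj3uH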
00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WptzWTj3uH 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WptzWTj3uH 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WptzWTj3uH 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WptzWTj3uH' 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84201 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84201 /var/tmp/bdevperf.sock 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84201 ']' 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.939 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.939 [2024-07-25 07:31:03.578342] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:30.939 [2024-07-25 07:31:03.578419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84201 ] 00:16:31.198 [2024-07-25 07:31:03.720381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.198 [2024-07-25 07:31:03.824539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.767 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.767 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:31.767 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WptzWTj3uH 00:16:32.027 [2024-07-25 07:31:04.648467] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.027 [2024-07-25 07:31:04.648570] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:32.027 [2024-07-25 07:31:04.653160] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:32.027 [2024-07-25 07:31:04.653199] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:32.027 [2024-07-25 07:31:04.653249] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:32.027 [2024-07-25 07:31:04.653887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8acca0 (107): Transport endpoint is not connected 00:16:32.027 [2024-07-25 07:31:04.654874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8acca0 (9): Bad file descriptor 00:16:32.027 [2024-07-25 07:31:04.655868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:32.027 [2024-07-25 07:31:04.655890] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:32.027 [2024-07-25 07:31:04.655900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:32.027 2024/07/25 07:31:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.WptzWTj3uH subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:32.027 request: 00:16:32.027 { 00:16:32.027 "method": "bdev_nvme_attach_controller", 00:16:32.027 "params": { 00:16:32.027 "name": "TLSTEST", 00:16:32.027 "trtype": "tcp", 00:16:32.027 "traddr": "10.0.0.2", 00:16:32.027 "adrfam": "ipv4", 00:16:32.027 "trsvcid": "4420", 00:16:32.027 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:32.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.027 "prchk_reftag": false, 00:16:32.027 "prchk_guard": false, 00:16:32.027 "hdgst": false, 00:16:32.027 "ddgst": false, 00:16:32.027 "psk": "/tmp/tmp.WptzWTj3uH" 00:16:32.027 } 00:16:32.027 } 00:16:32.027 Got JSON-RPC error response 00:16:32.027 GoRPCClient: error on JSON-RPC call 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 84201 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84201 ']' 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84201 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84201 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:32.027 killing process with pid 84201 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84201' 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84201 00:16:32.027 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.027 00:16:32.027 Latency(us) 00:16:32.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.027 =================================================================================================================== 00:16:32.027 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.027 [2024-07-25 07:31:04.713623] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:32.027 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84201 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84247 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84247 /var/tmp/bdevperf.sock 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84247 ']' 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.286 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.286 [2024-07-25 07:31:04.957171] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:32.286 [2024-07-25 07:31:04.957252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84247 ] 00:16:32.546 [2024-07-25 07:31:05.097271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.546 [2024-07-25 07:31:05.199587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.481 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.481 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:33.481 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:33.481 [2024-07-25 07:31:06.049718] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:33.481 [2024-07-25 07:31:06.051521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa12240 (9): Bad file descriptor 00:16:33.481 [2024-07-25 07:31:06.052512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:33.481 [2024-07-25 07:31:06.052536] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:33.481 [2024-07-25 07:31:06.052547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:33.481 2024/07/25 07:31:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:33.481 request: 00:16:33.481 { 00:16:33.481 "method": "bdev_nvme_attach_controller", 00:16:33.481 "params": { 00:16:33.481 "name": "TLSTEST", 00:16:33.481 "trtype": "tcp", 00:16:33.481 "traddr": "10.0.0.2", 00:16:33.481 "adrfam": "ipv4", 00:16:33.481 "trsvcid": "4420", 00:16:33.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.481 "prchk_reftag": false, 00:16:33.481 "prchk_guard": false, 00:16:33.481 "hdgst": false, 00:16:33.481 "ddgst": false 00:16:33.481 } 00:16:33.481 } 00:16:33.481 Got JSON-RPC error response 00:16:33.481 GoRPCClient: error on JSON-RPC call 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 84247 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84247 ']' 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84247 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84247 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:33.481 killing process with pid 84247 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84247' 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84247 00:16:33.481 Received shutdown signal, test time was about 10.000000 seconds 00:16:33.481 00:16:33.481 Latency(us) 00:16:33.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.481 =================================================================================================================== 00:16:33.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.481 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84247 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 83610 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83610 ']' 00:16:33.739 07:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83610 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83610 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:33.739 killing process with pid 83610 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83610' 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83610 00:16:33.739 [2024-07-25 07:31:06.350962] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:33.739 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83610 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.AUDfLvpYTE 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.AUDfLvpYTE 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84298 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:33.998 07:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84298 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84298 ']' 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.998 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.998 [2024-07-25 07:31:06.690553] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:33.998 [2024-07-25 07:31:06.690641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.257 [2024-07-25 07:31:06.831373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.257 [2024-07-25 07:31:06.935612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.257 [2024-07-25 07:31:06.935666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.257 [2024-07-25 07:31:06.935673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.257 [2024-07-25 07:31:06.935679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.257 [2024-07-25 07:31:06.935683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
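The /tmp/tmp.AUDfLvpYTE key that the target below is configured with was generated a few lines up by format_interchange_psk, whose inline python step is not expanded in the trace. A minimal sketch of that computation, assuming the TLS PSK interchange format is the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and the base64 of the configured key with a 4-byte little-endian CRC32 appended:

    # Sketch: rebuild the NVMeTLSkey-1 interchange string captured above as key_long.
    # Assumption: format is "NVMeTLSkey-1:<hh>:base64(key + crc32(key), CRC little-endian):"
    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        raw = key.encode()                            # the configured PSK bytes
        crc = zlib.crc32(raw).to_bytes(4, "little")   # 4-byte checksum appended to the key
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(raw + crc).decode())

    # Should reproduce the key_long value in the trace above.
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))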
00:16:34.257 [2024-07-25 07:31:06.935706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.AUDfLvpYTE 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AUDfLvpYTE 00:16:35.193 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:35.193 [2024-07-25 07:31:07.889770] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.194 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:35.451 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:35.709 [2024-07-25 07:31:08.340993] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:35.709 [2024-07-25 07:31:08.341190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.709 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:35.967 malloc0 00:16:35.967 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:36.229 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:16:36.498 [2024-07-25 07:31:08.964636] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:36.498 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AUDfLvpYTE 00:16:36.498 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AUDfLvpYTE' 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:36.499 07:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84405 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84405 /var/tmp/bdevperf.sock 00:16:36.499 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84405 ']' 00:16:36.499 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.499 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.499 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.499 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.499 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.499 [2024-07-25 07:31:09.037062] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:36.499 [2024-07-25 07:31:09.037175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84405 ] 00:16:36.499 [2024-07-25 07:31:09.160016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.758 [2024-07-25 07:31:09.260744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.327 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.327 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:37.327 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:16:37.586 [2024-07-25 07:31:10.135852] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.586 [2024-07-25 07:31:10.136330] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:37.586 TLSTESTn1 00:16:37.586 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:37.846 Running I/O for 10 seconds... 
00:16:47.829 00:16:47.829 Latency(us) 00:16:47.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.829 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:47.829 Verification LBA range: start 0x0 length 0x2000 00:16:47.829 TLSTESTn1 : 10.01 5135.96 20.06 0.00 0.00 24876.95 5866.76 21749.94 00:16:47.829 =================================================================================================================== 00:16:47.829 Total : 5135.96 20.06 0.00 0.00 24876.95 5866.76 21749.94 00:16:47.829 0 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 84405 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84405 ']' 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84405 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84405 00:16:47.829 killing process with pid 84405 00:16:47.829 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.829 00:16:47.829 Latency(us) 00:16:47.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.829 =================================================================================================================== 00:16:47.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84405' 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84405 00:16:47.829 [2024-07-25 07:31:20.357083] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:47.829 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84405 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.AUDfLvpYTE 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AUDfLvpYTE 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AUDfLvpYTE 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:48.087 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AUDfLvpYTE 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AUDfLvpYTE' 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84553 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84553 /var/tmp/bdevperf.sock 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84553 ']' 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.087 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.087 [2024-07-25 07:31:20.625435] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:48.087 [2024-07-25 07:31:20.625541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84553 ] 00:16:48.087 [2024-07-25 07:31:20.752826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.345 [2024-07-25 07:31:20.874690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.909 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.909 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:48.909 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:16:49.167 [2024-07-25 07:31:21.836175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.167 [2024-07-25 07:31:21.836739] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:49.167 [2024-07-25 07:31:21.836859] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.AUDfLvpYTE 00:16:49.167 2024/07/25 07:31:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.AUDfLvpYTE subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:49.167 request: 00:16:49.167 { 00:16:49.167 "method": "bdev_nvme_attach_controller", 00:16:49.167 "params": { 00:16:49.167 "name": "TLSTEST", 00:16:49.167 "trtype": "tcp", 00:16:49.167 "traddr": "10.0.0.2", 00:16:49.167 "adrfam": "ipv4", 00:16:49.167 "trsvcid": "4420", 00:16:49.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:49.167 "prchk_reftag": false, 00:16:49.167 "prchk_guard": false, 00:16:49.167 "hdgst": false, 00:16:49.167 "ddgst": false, 00:16:49.167 "psk": "/tmp/tmp.AUDfLvpYTE" 00:16:49.167 } 00:16:49.167 } 00:16:49.167 Got JSON-RPC error response 00:16:49.167 GoRPCClient: error on JSON-RPC call 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 84553 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84553 ']' 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84553 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84553 00:16:49.167 killing process with pid 84553 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:49.167 
07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84553' 00:16:49.167 Received shutdown signal, test time was about 10.000000 seconds 00:16:49.167 00:16:49.167 Latency(us) 00:16:49.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.167 =================================================================================================================== 00:16:49.167 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84553 00:16:49.167 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84553 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 84298 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84298 ']' 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84298 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84298 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:49.428 killing process with pid 84298 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84298' 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84298 00:16:49.428 [2024-07-25 07:31:22.130880] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:49.428 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84298 00:16:49.685 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:49.685 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84604 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
84604 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84604 ']' 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.686 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.686 [2024-07-25 07:31:22.412750] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:49.686 [2024-07-25 07:31:22.412851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.943 [2024-07-25 07:31:22.538093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.943 [2024-07-25 07:31:22.654181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.943 [2024-07-25 07:31:22.654236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.943 [2024-07-25 07:31:22.654244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.943 [2024-07-25 07:31:22.654249] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.943 [2024-07-25 07:31:22.654254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:49.943 [2024-07-25 07:31:22.654280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.AUDfLvpYTE 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.AUDfLvpYTE 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.AUDfLvpYTE 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AUDfLvpYTE 00:16:50.875 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:51.132 [2024-07-25 07:31:23.613771] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.132 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:51.389 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:51.646 [2024-07-25 07:31:24.188889] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.646 [2024-07-25 07:31:24.189108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.646 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:51.904 malloc0 00:16:51.904 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:16:52.163 [2024-07-25 07:31:24.864940] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect 
permissions for PSK file 00:16:52.163 [2024-07-25 07:31:24.864993] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:52.163 [2024-07-25 07:31:24.865027] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:52.163 2024/07/25 07:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.AUDfLvpYTE], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:52.163 request: 00:16:52.163 { 00:16:52.163 "method": "nvmf_subsystem_add_host", 00:16:52.163 "params": { 00:16:52.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.163 "host": "nqn.2016-06.io.spdk:host1", 00:16:52.163 "psk": "/tmp/tmp.AUDfLvpYTE" 00:16:52.163 } 00:16:52.163 } 00:16:52.163 Got JSON-RPC error response 00:16:52.163 GoRPCClient: error on JSON-RPC call 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 84604 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84604 ']' 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84604 00:16:52.163 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84604 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:52.421 killing process with pid 84604 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84604' 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84604 00:16:52.421 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84604 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.AUDfLvpYTE 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84719 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84719 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84719 ']' 00:16:52.421 
07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.421 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.422 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.422 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:52.422 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.422 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.680 [2024-07-25 07:31:25.194765] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:52.680 [2024-07-25 07:31:25.195457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.680 [2024-07-25 07:31:25.336452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.940 [2024-07-25 07:31:25.443806] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.940 [2024-07-25 07:31:25.443857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.940 [2024-07-25 07:31:25.443864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.940 [2024-07-25 07:31:25.443870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.940 [2024-07-25 07:31:25.443874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
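The two "Incorrect permissions for PSK file" failures above (bdev_nvme_load_psk on the initiator side, tcp_load_psk on the target side) were provoked by the chmod 0666 at target/tls.sh@170; after the chmod 0600 at target/tls.sh@181 the same key file is accepted again in the restart that follows. A sketch of the kind of gate those errors imply, assuming the rule is simply that the PSK file must not be readable or writable by group or others:

    # Sketch: the permission rule suggested by the tcp_load_psk / bdev_nvme_load_psk errors above.
    # Assumption: any group/other access bit on the PSK file causes it to be rejected.
    import os
    import stat
    import tempfile

    def psk_file_permissions_ok(path: str) -> bool:
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0  # 0600 passes, 0666 fails

    with tempfile.NamedTemporaryFile() as key_file:          # stand-in for /tmp/tmp.AUDfLvpYTE
        for perm in (0o600, 0o666):
            os.chmod(key_file.name, perm)
            print(oct(perm), psk_file_permissions_ok(key_file.name))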
00:16:52.940 [2024-07-25 07:31:25.443896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.AUDfLvpYTE 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AUDfLvpYTE 00:16:53.507 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.766 [2024-07-25 07:31:26.394311] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.766 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:54.025 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:54.284 [2024-07-25 07:31:26.873504] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:54.284 [2024-07-25 07:31:26.873709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.284 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:54.543 malloc0 00:16:54.543 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:16:54.802 [2024-07-25 07:31:27.505456] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84816 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84816 /var/tmp/bdevperf.sock 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84816 ']' 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.802 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.061 [2024-07-25 07:31:27.576697] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:55.061 [2024-07-25 07:31:27.576796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84816 ] 00:16:55.061 [2024-07-25 07:31:27.701941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.320 [2024-07-25 07:31:27.807730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.890 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.890 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:55.890 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:16:56.150 [2024-07-25 07:31:28.657050] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.150 [2024-07-25 07:31:28.657143] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:56.150 TLSTESTn1 00:16:56.150 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:56.410 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:56.410 "subsystems": [ 00:16:56.410 { 00:16:56.410 "subsystem": "keyring", 00:16:56.410 "config": [] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "iobuf", 00:16:56.410 "config": [ 00:16:56.410 { 00:16:56.410 "method": "iobuf_set_options", 00:16:56.410 "params": { 00:16:56.410 "large_bufsize": 135168, 00:16:56.410 "large_pool_count": 1024, 00:16:56.410 "small_bufsize": 8192, 00:16:56.410 "small_pool_count": 8192 00:16:56.410 } 00:16:56.410 } 00:16:56.410 ] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "sock", 00:16:56.410 "config": [ 00:16:56.410 { 00:16:56.410 "method": "sock_set_default_impl", 00:16:56.410 "params": { 00:16:56.410 "impl_name": "posix" 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "sock_impl_set_options", 00:16:56.410 "params": { 00:16:56.410 "enable_ktls": false, 00:16:56.410 "enable_placement_id": 0, 00:16:56.410 "enable_quickack": false, 00:16:56.410 "enable_recv_pipe": true, 00:16:56.410 "enable_zerocopy_send_client": false, 00:16:56.410 "enable_zerocopy_send_server": true, 00:16:56.410 "impl_name": "ssl", 00:16:56.410 "recv_buf_size": 4096, 
00:16:56.410 "send_buf_size": 4096, 00:16:56.410 "tls_version": 0, 00:16:56.410 "zerocopy_threshold": 0 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "sock_impl_set_options", 00:16:56.410 "params": { 00:16:56.410 "enable_ktls": false, 00:16:56.410 "enable_placement_id": 0, 00:16:56.410 "enable_quickack": false, 00:16:56.410 "enable_recv_pipe": true, 00:16:56.410 "enable_zerocopy_send_client": false, 00:16:56.410 "enable_zerocopy_send_server": true, 00:16:56.410 "impl_name": "posix", 00:16:56.410 "recv_buf_size": 2097152, 00:16:56.410 "send_buf_size": 2097152, 00:16:56.410 "tls_version": 0, 00:16:56.410 "zerocopy_threshold": 0 00:16:56.410 } 00:16:56.410 } 00:16:56.410 ] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "vmd", 00:16:56.410 "config": [] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "accel", 00:16:56.410 "config": [ 00:16:56.410 { 00:16:56.410 "method": "accel_set_options", 00:16:56.410 "params": { 00:16:56.410 "buf_count": 2048, 00:16:56.410 "large_cache_size": 16, 00:16:56.410 "sequence_count": 2048, 00:16:56.410 "small_cache_size": 128, 00:16:56.410 "task_count": 2048 00:16:56.410 } 00:16:56.410 } 00:16:56.410 ] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "bdev", 00:16:56.410 "config": [ 00:16:56.410 { 00:16:56.410 "method": "bdev_set_options", 00:16:56.410 "params": { 00:16:56.410 "bdev_auto_examine": true, 00:16:56.410 "bdev_io_cache_size": 256, 00:16:56.410 "bdev_io_pool_size": 65535, 00:16:56.410 "iobuf_large_cache_size": 16, 00:16:56.410 "iobuf_small_cache_size": 128 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "bdev_raid_set_options", 00:16:56.410 "params": { 00:16:56.410 "process_max_bandwidth_mb_sec": 0, 00:16:56.410 "process_window_size_kb": 1024 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "bdev_iscsi_set_options", 00:16:56.410 "params": { 00:16:56.410 "timeout_sec": 30 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "bdev_nvme_set_options", 00:16:56.410 "params": { 00:16:56.410 "action_on_timeout": "none", 00:16:56.410 "allow_accel_sequence": false, 00:16:56.410 "arbitration_burst": 0, 00:16:56.410 "bdev_retry_count": 3, 00:16:56.410 "ctrlr_loss_timeout_sec": 0, 00:16:56.410 "delay_cmd_submit": true, 00:16:56.410 "dhchap_dhgroups": [ 00:16:56.410 "null", 00:16:56.410 "ffdhe2048", 00:16:56.410 "ffdhe3072", 00:16:56.410 "ffdhe4096", 00:16:56.410 "ffdhe6144", 00:16:56.410 "ffdhe8192" 00:16:56.410 ], 00:16:56.410 "dhchap_digests": [ 00:16:56.410 "sha256", 00:16:56.410 "sha384", 00:16:56.410 "sha512" 00:16:56.410 ], 00:16:56.410 "disable_auto_failback": false, 00:16:56.410 "fast_io_fail_timeout_sec": 0, 00:16:56.410 "generate_uuids": false, 00:16:56.410 "high_priority_weight": 0, 00:16:56.410 "io_path_stat": false, 00:16:56.410 "io_queue_requests": 0, 00:16:56.410 "keep_alive_timeout_ms": 10000, 00:16:56.410 "low_priority_weight": 0, 00:16:56.410 "medium_priority_weight": 0, 00:16:56.410 "nvme_adminq_poll_period_us": 10000, 00:16:56.410 "nvme_error_stat": false, 00:16:56.410 "nvme_ioq_poll_period_us": 0, 00:16:56.410 "rdma_cm_event_timeout_ms": 0, 00:16:56.410 "rdma_max_cq_size": 0, 00:16:56.410 "rdma_srq_size": 0, 00:16:56.410 "reconnect_delay_sec": 0, 00:16:56.410 "timeout_admin_us": 0, 00:16:56.410 "timeout_us": 0, 00:16:56.410 "transport_ack_timeout": 0, 00:16:56.410 "transport_retry_count": 4, 00:16:56.410 "transport_tos": 0 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "bdev_nvme_set_hotplug", 00:16:56.410 "params": { 
00:16:56.410 "enable": false, 00:16:56.410 "period_us": 100000 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "bdev_malloc_create", 00:16:56.410 "params": { 00:16:56.410 "block_size": 4096, 00:16:56.410 "dif_is_head_of_md": false, 00:16:56.410 "dif_pi_format": 0, 00:16:56.410 "dif_type": 0, 00:16:56.410 "md_size": 0, 00:16:56.410 "name": "malloc0", 00:16:56.410 "num_blocks": 8192, 00:16:56.410 "optimal_io_boundary": 0, 00:16:56.410 "physical_block_size": 4096, 00:16:56.410 "uuid": "c00e177b-a64e-4408-8ba0-12f3d050056d" 00:16:56.410 } 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "method": "bdev_wait_for_examine" 00:16:56.410 } 00:16:56.410 ] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "nbd", 00:16:56.410 "config": [] 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "subsystem": "scheduler", 00:16:56.410 "config": [ 00:16:56.410 { 00:16:56.410 "method": "framework_set_scheduler", 00:16:56.410 "params": { 00:16:56.410 "name": "static" 00:16:56.411 } 00:16:56.411 } 00:16:56.411 ] 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "subsystem": "nvmf", 00:16:56.411 "config": [ 00:16:56.411 { 00:16:56.411 "method": "nvmf_set_config", 00:16:56.411 "params": { 00:16:56.411 "admin_cmd_passthru": { 00:16:56.411 "identify_ctrlr": false 00:16:56.411 }, 00:16:56.411 "discovery_filter": "match_any" 00:16:56.411 } 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "method": "nvmf_set_max_subsystems", 00:16:56.411 "params": { 00:16:56.411 "max_subsystems": 1024 00:16:56.411 } 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "method": "nvmf_set_crdt", 00:16:56.411 "params": { 00:16:56.411 "crdt1": 0, 00:16:56.411 "crdt2": 0, 00:16:56.411 "crdt3": 0 00:16:56.411 } 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "method": "nvmf_create_transport", 00:16:56.411 "params": { 00:16:56.411 "abort_timeout_sec": 1, 00:16:56.411 "ack_timeout": 0, 00:16:56.411 "buf_cache_size": 4294967295, 00:16:56.411 "c2h_success": false, 00:16:56.411 "data_wr_pool_size": 0, 00:16:56.411 "dif_insert_or_strip": false, 00:16:56.411 "in_capsule_data_size": 4096, 00:16:56.411 "io_unit_size": 131072, 00:16:56.411 "max_aq_depth": 128, 00:16:56.411 "max_io_qpairs_per_ctrlr": 127, 00:16:56.411 "max_io_size": 131072, 00:16:56.411 "max_queue_depth": 128, 00:16:56.411 "num_shared_buffers": 511, 00:16:56.411 "sock_priority": 0, 00:16:56.411 "trtype": "TCP", 00:16:56.411 "zcopy": false 00:16:56.411 } 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "method": "nvmf_create_subsystem", 00:16:56.411 "params": { 00:16:56.411 "allow_any_host": false, 00:16:56.411 "ana_reporting": false, 00:16:56.411 "max_cntlid": 65519, 00:16:56.411 "max_namespaces": 10, 00:16:56.411 "min_cntlid": 1, 00:16:56.411 "model_number": "SPDK bdev Controller", 00:16:56.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.411 "serial_number": "SPDK00000000000001" 00:16:56.411 } 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "method": "nvmf_subsystem_add_host", 00:16:56.411 "params": { 00:16:56.411 "host": "nqn.2016-06.io.spdk:host1", 00:16:56.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.411 "psk": "/tmp/tmp.AUDfLvpYTE" 00:16:56.411 } 00:16:56.411 }, 00:16:56.411 { 00:16:56.411 "method": "nvmf_subsystem_add_ns", 00:16:56.411 "params": { 00:16:56.411 "namespace": { 00:16:56.411 "bdev_name": "malloc0", 00:16:56.411 "nguid": "C00E177BA64E44088BA012F3D050056D", 00:16:56.411 "no_auto_visible": false, 00:16:56.411 "nsid": 1, 00:16:56.411 "uuid": "c00e177b-a64e-4408-8ba0-12f3d050056d" 00:16:56.411 }, 00:16:56.411 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.411 } 00:16:56.411 }, 
00:16:56.411 { 00:16:56.411 "method": "nvmf_subsystem_add_listener", 00:16:56.411 "params": { 00:16:56.411 "listen_address": { 00:16:56.411 "adrfam": "IPv4", 00:16:56.411 "traddr": "10.0.0.2", 00:16:56.411 "trsvcid": "4420", 00:16:56.411 "trtype": "TCP" 00:16:56.411 }, 00:16:56.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.411 "secure_channel": true 00:16:56.411 } 00:16:56.411 } 00:16:56.411 ] 00:16:56.411 } 00:16:56.411 ] 00:16:56.411 }' 00:16:56.411 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:56.670 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:56.670 "subsystems": [ 00:16:56.670 { 00:16:56.670 "subsystem": "keyring", 00:16:56.670 "config": [] 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "subsystem": "iobuf", 00:16:56.670 "config": [ 00:16:56.670 { 00:16:56.670 "method": "iobuf_set_options", 00:16:56.670 "params": { 00:16:56.670 "large_bufsize": 135168, 00:16:56.670 "large_pool_count": 1024, 00:16:56.670 "small_bufsize": 8192, 00:16:56.670 "small_pool_count": 8192 00:16:56.670 } 00:16:56.670 } 00:16:56.670 ] 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "subsystem": "sock", 00:16:56.670 "config": [ 00:16:56.670 { 00:16:56.670 "method": "sock_set_default_impl", 00:16:56.670 "params": { 00:16:56.670 "impl_name": "posix" 00:16:56.670 } 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "method": "sock_impl_set_options", 00:16:56.670 "params": { 00:16:56.670 "enable_ktls": false, 00:16:56.670 "enable_placement_id": 0, 00:16:56.670 "enable_quickack": false, 00:16:56.670 "enable_recv_pipe": true, 00:16:56.670 "enable_zerocopy_send_client": false, 00:16:56.670 "enable_zerocopy_send_server": true, 00:16:56.670 "impl_name": "ssl", 00:16:56.670 "recv_buf_size": 4096, 00:16:56.670 "send_buf_size": 4096, 00:16:56.670 "tls_version": 0, 00:16:56.670 "zerocopy_threshold": 0 00:16:56.670 } 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "method": "sock_impl_set_options", 00:16:56.670 "params": { 00:16:56.670 "enable_ktls": false, 00:16:56.670 "enable_placement_id": 0, 00:16:56.670 "enable_quickack": false, 00:16:56.670 "enable_recv_pipe": true, 00:16:56.670 "enable_zerocopy_send_client": false, 00:16:56.670 "enable_zerocopy_send_server": true, 00:16:56.670 "impl_name": "posix", 00:16:56.670 "recv_buf_size": 2097152, 00:16:56.670 "send_buf_size": 2097152, 00:16:56.670 "tls_version": 0, 00:16:56.670 "zerocopy_threshold": 0 00:16:56.670 } 00:16:56.670 } 00:16:56.670 ] 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "subsystem": "vmd", 00:16:56.670 "config": [] 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "subsystem": "accel", 00:16:56.670 "config": [ 00:16:56.670 { 00:16:56.670 "method": "accel_set_options", 00:16:56.670 "params": { 00:16:56.670 "buf_count": 2048, 00:16:56.670 "large_cache_size": 16, 00:16:56.670 "sequence_count": 2048, 00:16:56.670 "small_cache_size": 128, 00:16:56.670 "task_count": 2048 00:16:56.670 } 00:16:56.670 } 00:16:56.670 ] 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "subsystem": "bdev", 00:16:56.670 "config": [ 00:16:56.670 { 00:16:56.670 "method": "bdev_set_options", 00:16:56.670 "params": { 00:16:56.670 "bdev_auto_examine": true, 00:16:56.670 "bdev_io_cache_size": 256, 00:16:56.670 "bdev_io_pool_size": 65535, 00:16:56.670 "iobuf_large_cache_size": 16, 00:16:56.670 "iobuf_small_cache_size": 128 00:16:56.670 } 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "method": "bdev_raid_set_options", 00:16:56.670 "params": { 00:16:56.670 
"process_max_bandwidth_mb_sec": 0, 00:16:56.670 "process_window_size_kb": 1024 00:16:56.670 } 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "method": "bdev_iscsi_set_options", 00:16:56.670 "params": { 00:16:56.670 "timeout_sec": 30 00:16:56.670 } 00:16:56.670 }, 00:16:56.670 { 00:16:56.670 "method": "bdev_nvme_set_options", 00:16:56.670 "params": { 00:16:56.670 "action_on_timeout": "none", 00:16:56.670 "allow_accel_sequence": false, 00:16:56.670 "arbitration_burst": 0, 00:16:56.670 "bdev_retry_count": 3, 00:16:56.670 "ctrlr_loss_timeout_sec": 0, 00:16:56.670 "delay_cmd_submit": true, 00:16:56.670 "dhchap_dhgroups": [ 00:16:56.670 "null", 00:16:56.670 "ffdhe2048", 00:16:56.670 "ffdhe3072", 00:16:56.670 "ffdhe4096", 00:16:56.670 "ffdhe6144", 00:16:56.670 "ffdhe8192" 00:16:56.670 ], 00:16:56.670 "dhchap_digests": [ 00:16:56.670 "sha256", 00:16:56.670 "sha384", 00:16:56.670 "sha512" 00:16:56.670 ], 00:16:56.670 "disable_auto_failback": false, 00:16:56.670 "fast_io_fail_timeout_sec": 0, 00:16:56.670 "generate_uuids": false, 00:16:56.670 "high_priority_weight": 0, 00:16:56.670 "io_path_stat": false, 00:16:56.670 "io_queue_requests": 512, 00:16:56.670 "keep_alive_timeout_ms": 10000, 00:16:56.671 "low_priority_weight": 0, 00:16:56.671 "medium_priority_weight": 0, 00:16:56.671 "nvme_adminq_poll_period_us": 10000, 00:16:56.671 "nvme_error_stat": false, 00:16:56.671 "nvme_ioq_poll_period_us": 0, 00:16:56.671 "rdma_cm_event_timeout_ms": 0, 00:16:56.671 "rdma_max_cq_size": 0, 00:16:56.671 "rdma_srq_size": 0, 00:16:56.671 "reconnect_delay_sec": 0, 00:16:56.671 "timeout_admin_us": 0, 00:16:56.671 "timeout_us": 0, 00:16:56.671 "transport_ack_timeout": 0, 00:16:56.671 "transport_retry_count": 4, 00:16:56.671 "transport_tos": 0 00:16:56.671 } 00:16:56.671 }, 00:16:56.671 { 00:16:56.671 "method": "bdev_nvme_attach_controller", 00:16:56.671 "params": { 00:16:56.671 "adrfam": "IPv4", 00:16:56.671 "ctrlr_loss_timeout_sec": 0, 00:16:56.671 "ddgst": false, 00:16:56.671 "fast_io_fail_timeout_sec": 0, 00:16:56.671 "hdgst": false, 00:16:56.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.671 "name": "TLSTEST", 00:16:56.671 "prchk_guard": false, 00:16:56.671 "prchk_reftag": false, 00:16:56.671 "psk": "/tmp/tmp.AUDfLvpYTE", 00:16:56.671 "reconnect_delay_sec": 0, 00:16:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.671 "traddr": "10.0.0.2", 00:16:56.671 "trsvcid": "4420", 00:16:56.671 "trtype": "TCP" 00:16:56.671 } 00:16:56.671 }, 00:16:56.671 { 00:16:56.671 "method": "bdev_nvme_set_hotplug", 00:16:56.671 "params": { 00:16:56.671 "enable": false, 00:16:56.671 "period_us": 100000 00:16:56.671 } 00:16:56.671 }, 00:16:56.671 { 00:16:56.671 "method": "bdev_wait_for_examine" 00:16:56.671 } 00:16:56.671 ] 00:16:56.671 }, 00:16:56.671 { 00:16:56.671 "subsystem": "nbd", 00:16:56.671 "config": [] 00:16:56.671 } 00:16:56.671 ] 00:16:56.671 }' 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 84816 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84816 ']' 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84816 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84816 00:16:56.671 
07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:56.671 killing process with pid 84816 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84816' 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84816 00:16:56.671 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.671 00:16:56.671 Latency(us) 00:16:56.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.671 =================================================================================================================== 00:16:56.671 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.671 [2024-07-25 07:31:29.377394] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:56.671 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84816 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 84719 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84719 ']' 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84719 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84719 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:56.931 killing process with pid 84719 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84719' 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84719 00:16:56.931 [2024-07-25 07:31:29.603789] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:56.931 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84719 00:16:57.189 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:57.189 "subsystems": [ 00:16:57.189 { 00:16:57.189 "subsystem": "keyring", 00:16:57.189 "config": [] 00:16:57.189 }, 00:16:57.189 { 00:16:57.189 "subsystem": "iobuf", 00:16:57.189 "config": [ 00:16:57.189 { 00:16:57.189 "method": "iobuf_set_options", 00:16:57.189 "params": { 00:16:57.190 "large_bufsize": 135168, 00:16:57.190 "large_pool_count": 1024, 00:16:57.190 "small_bufsize": 8192, 00:16:57.190 "small_pool_count": 8192 00:16:57.190 } 00:16:57.190 } 00:16:57.190 ] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "sock", 00:16:57.190 "config": [ 00:16:57.190 { 00:16:57.190 "method": "sock_set_default_impl", 00:16:57.190 "params": { 00:16:57.190 "impl_name": "posix" 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": 
"sock_impl_set_options", 00:16:57.190 "params": { 00:16:57.190 "enable_ktls": false, 00:16:57.190 "enable_placement_id": 0, 00:16:57.190 "enable_quickack": false, 00:16:57.190 "enable_recv_pipe": true, 00:16:57.190 "enable_zerocopy_send_client": false, 00:16:57.190 "enable_zerocopy_send_server": true, 00:16:57.190 "impl_name": "ssl", 00:16:57.190 "recv_buf_size": 4096, 00:16:57.190 "send_buf_size": 4096, 00:16:57.190 "tls_version": 0, 00:16:57.190 "zerocopy_threshold": 0 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "sock_impl_set_options", 00:16:57.190 "params": { 00:16:57.190 "enable_ktls": false, 00:16:57.190 "enable_placement_id": 0, 00:16:57.190 "enable_quickack": false, 00:16:57.190 "enable_recv_pipe": true, 00:16:57.190 "enable_zerocopy_send_client": false, 00:16:57.190 "enable_zerocopy_send_server": true, 00:16:57.190 "impl_name": "posix", 00:16:57.190 "recv_buf_size": 2097152, 00:16:57.190 "send_buf_size": 2097152, 00:16:57.190 "tls_version": 0, 00:16:57.190 "zerocopy_threshold": 0 00:16:57.190 } 00:16:57.190 } 00:16:57.190 ] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "vmd", 00:16:57.190 "config": [] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "accel", 00:16:57.190 "config": [ 00:16:57.190 { 00:16:57.190 "method": "accel_set_options", 00:16:57.190 "params": { 00:16:57.190 "buf_count": 2048, 00:16:57.190 "large_cache_size": 16, 00:16:57.190 "sequence_count": 2048, 00:16:57.190 "small_cache_size": 128, 00:16:57.190 "task_count": 2048 00:16:57.190 } 00:16:57.190 } 00:16:57.190 ] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "bdev", 00:16:57.190 "config": [ 00:16:57.190 { 00:16:57.190 "method": "bdev_set_options", 00:16:57.190 "params": { 00:16:57.190 "bdev_auto_examine": true, 00:16:57.190 "bdev_io_cache_size": 256, 00:16:57.190 "bdev_io_pool_size": 65535, 00:16:57.190 "iobuf_large_cache_size": 16, 00:16:57.190 "iobuf_small_cache_size": 128 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "bdev_raid_set_options", 00:16:57.190 "params": { 00:16:57.190 "process_max_bandwidth_mb_sec": 0, 00:16:57.190 "process_window_size_kb": 1024 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "bdev_iscsi_set_options", 00:16:57.190 "params": { 00:16:57.190 "timeout_sec": 30 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "bdev_nvme_set_options", 00:16:57.190 "params": { 00:16:57.190 "action_on_timeout": "none", 00:16:57.190 "allow_accel_sequence": false, 00:16:57.190 "arbitration_burst": 0, 00:16:57.190 "bdev_retry_count": 3, 00:16:57.190 "ctrlr_loss_timeout_sec": 0, 00:16:57.190 "delay_cmd_submit": true, 00:16:57.190 "dhchap_dhgroups": [ 00:16:57.190 "null", 00:16:57.190 "ffdhe2048", 00:16:57.190 "ffdhe3072", 00:16:57.190 "ffdhe4096", 00:16:57.190 "ffdhe6144", 00:16:57.190 "ffdhe8192" 00:16:57.190 ], 00:16:57.190 "dhchap_digests": [ 00:16:57.190 "sha256", 00:16:57.190 "sha384", 00:16:57.190 "sha512" 00:16:57.190 ], 00:16:57.190 "disable_auto_failback": false, 00:16:57.190 "fast_io_fail_timeout_sec": 0, 00:16:57.190 "generate_uuids": false, 00:16:57.190 "high_priority_weight": 0, 00:16:57.190 "io_path_stat": false, 00:16:57.190 "io_queue_requests": 0, 00:16:57.190 "keep_alive_timeout_ms": 10000, 00:16:57.190 "low_priority_weight": 0, 00:16:57.190 "medium_priority_weight": 0, 00:16:57.190 "nvme_adminq_poll_period_us": 10000, 00:16:57.190 "nvme_error_stat": false, 00:16:57.190 "nvme_ioq_poll_period_us": 0, 00:16:57.190 "rdma_cm_event_timeout_ms": 0, 00:16:57.190 "rdma_max_cq_size": 
0, 00:16:57.190 "rdma_srq_size": 0, 00:16:57.190 "reconnect_delay_sec": 0, 00:16:57.190 "timeout_admin_us": 0, 00:16:57.190 "timeout_us": 0, 00:16:57.190 "transport_ack_timeout": 0, 00:16:57.190 "transport_retry_count": 4, 00:16:57.190 "transport_tos": 0 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "bdev_nvme_set_hotplug", 00:16:57.190 "params": { 00:16:57.190 "enable": false, 00:16:57.190 "period_us": 100000 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "bdev_malloc_create", 00:16:57.190 "params": { 00:16:57.190 "block_size": 4096, 00:16:57.190 "dif_is_head_of_md": false, 00:16:57.190 "dif_pi_format": 0, 00:16:57.190 "dif_type": 0, 00:16:57.190 "md_size": 0, 00:16:57.190 "name": "malloc0", 00:16:57.190 "num_blocks": 8192, 00:16:57.190 "optimal_io_boundary": 0, 00:16:57.190 "physical_block_size": 4096, 00:16:57.190 "uuid": "c00e177b-a64e-4408-8ba0-12f3d050056d" 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "bdev_wait_for_examine" 00:16:57.190 } 00:16:57.190 ] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "nbd", 00:16:57.190 "config": [] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "scheduler", 00:16:57.190 "config": [ 00:16:57.190 { 00:16:57.190 "method": "framework_set_scheduler", 00:16:57.190 "params": { 00:16:57.190 "name": "static" 00:16:57.190 } 00:16:57.190 } 00:16:57.190 ] 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "subsystem": "nvmf", 00:16:57.190 "config": [ 00:16:57.190 { 00:16:57.190 "method": "nvmf_set_config", 00:16:57.190 "params": { 00:16:57.190 "admin_cmd_passthru": { 00:16:57.190 "identify_ctrlr": false 00:16:57.190 }, 00:16:57.190 "discovery_filter": "match_any" 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "nvmf_set_max_subsystems", 00:16:57.190 "params": { 00:16:57.190 "max_subsystems": 1024 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "nvmf_set_crdt", 00:16:57.190 "params": { 00:16:57.190 "crdt1": 0, 00:16:57.190 "crdt2": 0, 00:16:57.190 "crdt3": 0 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "nvmf_create_transport", 00:16:57.190 "params": { 00:16:57.190 "abort_timeout_sec": 1, 00:16:57.190 "ack_timeout": 0, 00:16:57.190 "buf_cache_size": 4294967295, 00:16:57.190 "c2h_success": false, 00:16:57.190 "data_wr_pool_size": 0, 00:16:57.190 "dif_insert_or_strip": false, 00:16:57.190 "in_capsule_data_size": 4096, 00:16:57.190 "io_unit_size": 131072, 00:16:57.190 "max_aq_depth": 128, 00:16:57.190 "max_io_qpairs_per_ctrlr": 127, 00:16:57.190 "max_io_size": 131072, 00:16:57.190 "max_queue_depth": 128, 00:16:57.190 "num_shared_buffers": 511, 00:16:57.190 "sock_priority": 0, 00:16:57.190 "trtype": "TCP", 00:16:57.190 "zcopy": false 00:16:57.190 } 00:16:57.190 }, 00:16:57.190 { 00:16:57.190 "method": "nvmf_create_subsystem", 00:16:57.190 "params": { 00:16:57.190 "allow_any_host": false, 00:16:57.190 "ana_reporting": false, 00:16:57.190 "max_cntlid": 65519, 00:16:57.190 "max_namespaces": 10, 00:16:57.190 "min_cntlid": 1, 00:16:57.191 "model_number": "SPDK bdev Controller", 00:16:57.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.191 "serial_number": "SPDK00000000000001" 00:16:57.191 } 00:16:57.191 }, 00:16:57.191 { 00:16:57.191 "method": "nvmf_subsystem_add_host", 00:16:57.191 "params": { 00:16:57.191 "host": "nqn.2016-06.io.spdk:host1", 00:16:57.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.191 "psk": "/tmp/tmp.AUDfLvpYTE" 00:16:57.191 } 00:16:57.191 }, 00:16:57.191 { 00:16:57.191 "method": "nvmf_subsystem_add_ns", 
00:16:57.191 "params": { 00:16:57.191 "namespace": { 00:16:57.191 "bdev_name": "malloc0", 00:16:57.191 "nguid": "C00E177BA64E44088BA012F3D050056D", 00:16:57.191 "no_auto_visible": false, 00:16:57.191 "nsid": 1, 00:16:57.191 "uuid": "c00e177b-a64e-4408-8ba0-12f3d050056d" 00:16:57.191 }, 00:16:57.191 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:57.191 } 00:16:57.191 }, 00:16:57.191 { 00:16:57.191 "method": "nvmf_subsystem_add_listener", 00:16:57.191 "params": { 00:16:57.191 "listen_address": { 00:16:57.191 "adrfam": "IPv4", 00:16:57.191 "traddr": "10.0.0.2", 00:16:57.191 "trsvcid": "4420", 00:16:57.191 "trtype": "TCP" 00:16:57.191 }, 00:16:57.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.191 "secure_channel": true 00:16:57.191 } 00:16:57.191 } 00:16:57.191 ] 00:16:57.191 } 00:16:57.191 ] 00:16:57.191 }' 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84889 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84889 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84889 ']' 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.191 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.191 [2024-07-25 07:31:29.883730] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:16:57.191 [2024-07-25 07:31:29.884312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.450 [2024-07-25 07:31:30.014169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.450 [2024-07-25 07:31:30.135606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.450 [2024-07-25 07:31:30.135678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:57.450 [2024-07-25 07:31:30.135688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.450 [2024-07-25 07:31:30.135696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.450 [2024-07-25 07:31:30.135704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.450 [2024-07-25 07:31:30.135793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.708 [2024-07-25 07:31:30.351018] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.708 [2024-07-25 07:31:30.366904] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:57.708 [2024-07-25 07:31:30.382902] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:57.708 [2024-07-25 07:31:30.383177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84932 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84932 /var/tmp/bdevperf.sock 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84932 ']' 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:58.274 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:58.274 "subsystems": [ 00:16:58.274 { 00:16:58.274 "subsystem": "keyring", 00:16:58.274 "config": [] 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "subsystem": "iobuf", 00:16:58.274 "config": [ 00:16:58.274 { 00:16:58.274 "method": "iobuf_set_options", 00:16:58.274 "params": { 00:16:58.274 "large_bufsize": 135168, 00:16:58.274 "large_pool_count": 1024, 00:16:58.274 "small_bufsize": 8192, 00:16:58.274 "small_pool_count": 8192 00:16:58.274 } 00:16:58.274 } 00:16:58.274 ] 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "subsystem": "sock", 00:16:58.274 "config": [ 00:16:58.274 { 00:16:58.274 "method": "sock_set_default_impl", 00:16:58.274 "params": { 00:16:58.274 "impl_name": "posix" 00:16:58.274 } 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "method": "sock_impl_set_options", 00:16:58.274 "params": { 00:16:58.274 "enable_ktls": false, 00:16:58.274 "enable_placement_id": 0, 00:16:58.274 "enable_quickack": false, 00:16:58.274 "enable_recv_pipe": true, 00:16:58.274 "enable_zerocopy_send_client": false, 00:16:58.274 "enable_zerocopy_send_server": true, 00:16:58.274 "impl_name": "ssl", 00:16:58.274 "recv_buf_size": 4096, 00:16:58.274 "send_buf_size": 4096, 00:16:58.274 "tls_version": 0, 00:16:58.274 "zerocopy_threshold": 0 00:16:58.274 } 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "method": "sock_impl_set_options", 00:16:58.274 "params": { 00:16:58.274 "enable_ktls": false, 00:16:58.274 "enable_placement_id": 0, 00:16:58.274 "enable_quickack": false, 00:16:58.274 "enable_recv_pipe": true, 00:16:58.274 "enable_zerocopy_send_client": false, 00:16:58.274 "enable_zerocopy_send_server": true, 00:16:58.274 "impl_name": "posix", 00:16:58.274 "recv_buf_size": 2097152, 00:16:58.274 "send_buf_size": 2097152, 00:16:58.274 "tls_version": 0, 00:16:58.274 "zerocopy_threshold": 0 00:16:58.274 } 00:16:58.274 } 00:16:58.274 ] 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "subsystem": "vmd", 00:16:58.274 "config": [] 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "subsystem": "accel", 00:16:58.274 "config": [ 00:16:58.274 { 00:16:58.274 "method": "accel_set_options", 00:16:58.274 "params": { 00:16:58.274 "buf_count": 2048, 00:16:58.274 "large_cache_size": 16, 00:16:58.274 "sequence_count": 2048, 00:16:58.274 "small_cache_size": 128, 00:16:58.274 "task_count": 2048 00:16:58.274 } 00:16:58.274 } 00:16:58.274 ] 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "subsystem": "bdev", 00:16:58.274 "config": [ 00:16:58.274 { 00:16:58.274 "method": "bdev_set_options", 00:16:58.274 "params": { 00:16:58.274 "bdev_auto_examine": true, 00:16:58.274 "bdev_io_cache_size": 256, 00:16:58.274 "bdev_io_pool_size": 65535, 00:16:58.274 "iobuf_large_cache_size": 16, 00:16:58.274 "iobuf_small_cache_size": 128 00:16:58.274 } 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "method": "bdev_raid_set_options", 00:16:58.274 "params": { 00:16:58.274 "process_max_bandwidth_mb_sec": 0, 00:16:58.274 "process_window_size_kb": 1024 00:16:58.274 } 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "method": "bdev_iscsi_set_options", 00:16:58.274 "params": { 
00:16:58.274 "timeout_sec": 30 00:16:58.274 } 00:16:58.274 }, 00:16:58.274 { 00:16:58.274 "method": "bdev_nvme_set_options", 00:16:58.274 "params": { 00:16:58.274 "action_on_timeout": "none", 00:16:58.274 "allow_accel_sequence": false, 00:16:58.274 "arbitration_burst": 0, 00:16:58.274 "bdev_retry_count": 3, 00:16:58.274 "ctrlr_loss_timeout_sec": 0, 00:16:58.274 "delay_cmd_submit": true, 00:16:58.274 "dhchap_dhgroups": [ 00:16:58.274 "null", 00:16:58.274 "ffdhe2048", 00:16:58.274 "ffdhe3072", 00:16:58.274 "ffdhe4096", 00:16:58.274 "ffdhe6144", 00:16:58.274 "ffdhe8192" 00:16:58.274 ], 00:16:58.274 "dhchap_digests": [ 00:16:58.274 "sha256", 00:16:58.274 "sha384", 00:16:58.274 "sha512" 00:16:58.274 ], 00:16:58.274 "disable_auto_failback": false, 00:16:58.274 "fast_io_fail_timeout_sec": 0, 00:16:58.274 "generate_uuids": false, 00:16:58.274 "high_priority_weight": 0, 00:16:58.274 "io_path_stat": false, 00:16:58.274 "io_queue_requests": 512, 00:16:58.274 "keep_alive_timeout_ms": 10000, 00:16:58.274 "low_priority_weight": 0, 00:16:58.274 "medium_priority_weight": 0, 00:16:58.274 "nvme_adminq_poll_period_us": 10000, 00:16:58.274 "nvme_error_stat": false, 00:16:58.274 "nvme_ioq_poll_period_us": 0, 00:16:58.274 "rdma_cm_event_timeout_ms": 0, 00:16:58.274 "rdma_max_cq_size": 0, 00:16:58.274 "rdma_srq_size": 0, 00:16:58.274 "reconnect_delay_sec": 0, 00:16:58.274 "timeout_admin_us": 0, 00:16:58.274 "timeout_us": 0, 00:16:58.275 "transport_ack_timeout": 0, 00:16:58.275 "transport_retry_count": 4, 00:16:58.275 "transport_tos": 0 00:16:58.275 } 00:16:58.275 }, 00:16:58.275 { 00:16:58.275 "method": "bdev_nvme_attach_controller", 00:16:58.275 "params": { 00:16:58.275 "adrfam": "IPv4", 00:16:58.275 "ctrlr_loss_timeout_sec": 0, 00:16:58.275 "ddgst": false, 00:16:58.275 "fast_io_fail_timeout_sec": 0, 00:16:58.275 "hdgst": false, 00:16:58.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.275 "name": "TLSTEST", 00:16:58.275 "prchk_guard": false, 00:16:58.275 "prchk_reftag": false, 00:16:58.275 "psk": "/tmp/tmp.AUDfLvpYTE", 00:16:58.275 "reconnect_delay_sec": 0, 00:16:58.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.275 "traddr": "10.0.0.2", 00:16:58.275 "trsvcid": "4420", 00:16:58.275 "trtype": "TCP" 00:16:58.275 } 00:16:58.275 }, 00:16:58.275 { 00:16:58.275 "method": "bdev_nvme_set_hotplug", 00:16:58.275 "params": { 00:16:58.275 "enable": false, 00:16:58.275 "period_us": 100000 00:16:58.275 } 00:16:58.275 }, 00:16:58.275 { 00:16:58.275 "method": "bdev_wait_for_examine" 00:16:58.275 } 00:16:58.275 ] 00:16:58.275 }, 00:16:58.275 { 00:16:58.275 "subsystem": "nbd", 00:16:58.275 "config": [] 00:16:58.275 } 00:16:58.275 ] 00:16:58.275 }' 00:16:58.275 [2024-07-25 07:31:30.924578] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:16:58.275 [2024-07-25 07:31:30.924659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84932 ] 00:16:58.534 [2024-07-25 07:31:31.049531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.534 [2024-07-25 07:31:31.182415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.793 [2024-07-25 07:31:31.336305] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.793 [2024-07-25 07:31:31.336422] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:59.360 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.360 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:59.360 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:59.360 Running I/O for 10 seconds... 00:17:09.384 00:17:09.384 Latency(us) 00:17:09.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:09.384 Verification LBA range: start 0x0 length 0x2000 00:17:09.384 TLSTESTn1 : 10.01 5004.51 19.55 0.00 0.00 25533.71 4521.70 19803.89 00:17:09.384 =================================================================================================================== 00:17:09.384 Total : 5004.51 19.55 0.00 0.00 25533.71 4521.70 19803.89 00:17:09.384 0 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 84932 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84932 ']' 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84932 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84932 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:09.384 killing process with pid 84932 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84932' 00:17:09.384 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84932 00:17:09.384 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.384 00:17:09.384 Latency(us) 00:17:09.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.384 =================================================================================================================== 00:17:09.384 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.384 [2024-07-25 07:31:41.964957] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84932 00:17:09.384 scheduled for removal in v24.09 hit 1 times 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 84889 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84889 ']' 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84889 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84889 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:09.645 killing process with pid 84889 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84889' 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84889 00:17:09.645 [2024-07-25 07:31:42.196048] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:09.645 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84889 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85077 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85077 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85077 ']' 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.905 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.905 [2024-07-25 07:31:42.458990] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:09.905 [2024-07-25 07:31:42.459078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.905 [2024-07-25 07:31:42.600222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.165 [2024-07-25 07:31:42.704835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.165 [2024-07-25 07:31:42.704891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.165 [2024-07-25 07:31:42.704898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.165 [2024-07-25 07:31:42.704903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.165 [2024-07-25 07:31:42.704907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.165 [2024-07-25 07:31:42.704946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.AUDfLvpYTE 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AUDfLvpYTE 00:17:10.733 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:10.990 [2024-07-25 07:31:43.683280] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.990 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:11.246 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:11.504 [2024-07-25 07:31:44.190462] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.504 [2024-07-25 07:31:44.190794] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.504 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:17:12.068 malloc0 00:17:12.068 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:12.326 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AUDfLvpYTE 00:17:12.584 [2024-07-25 07:31:45.066580] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85180 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85180 /var/tmp/bdevperf.sock 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85180 ']' 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.584 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.584 [2024-07-25 07:31:45.137451] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:17:12.584 [2024-07-25 07:31:45.137615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85180 ] 00:17:12.584 [2024-07-25 07:31:45.270329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.842 [2024-07-25 07:31:45.393229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.407 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.407 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:13.407 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AUDfLvpYTE 00:17:13.976 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:13.976 [2024-07-25 07:31:46.617212] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.976 nvme0n1 00:17:14.236 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:14.236 Running I/O for 1 seconds... 00:17:15.176 00:17:15.176 Latency(us) 00:17:15.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.176 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.176 Verification LBA range: start 0x0 length 0x2000 00:17:15.176 nvme0n1 : 1.01 5118.85 20.00 0.00 0.00 24798.00 5122.68 20032.84 00:17:15.176 =================================================================================================================== 00:17:15.176 Total : 5118.85 20.00 0.00 0.00 24798.00 5122.68 20032.84 00:17:15.176 0 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 85180 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85180 ']' 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85180 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85180 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:15.176 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:15.177 killing process with pid 85180 00:17:15.177 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85180' 00:17:15.177 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85180 00:17:15.177 Received shutdown signal, test time was about 1.000000 seconds 00:17:15.177 00:17:15.177 Latency(us) 00:17:15.177 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:17:15.177 =================================================================================================================== 00:17:15.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.177 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85180 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 85077 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85077 ']' 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85077 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85077 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.437 killing process with pid 85077 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85077' 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85077 00:17:15.437 [2024-07-25 07:31:48.131317] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:15.437 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85077 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85256 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85256 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85256 ']' 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
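The stage that just finished (target/tls.sh@218-235) attaches the initiator through the keyring instead of passing a raw PSK path on attach: the key file is first registered with the bdevperf RPC server under the name key0 and then referenced by that name, which is why this attach does not trigger the spdk_nvme_ctrlr_opts.psk deprecation warning seen earlier. Condensed from the commands in this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Register the PSK file under the name key0 on the initiator side...
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.AUDfLvpYTE

    # ...and attach the TLS-protected controller by key name rather than file path.
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
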
00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.697 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 [2024-07-25 07:31:48.407724] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:15.697 [2024-07-25 07:31:48.407811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.957 [2024-07-25 07:31:48.549432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.957 [2024-07-25 07:31:48.655072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.957 [2024-07-25 07:31:48.655143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.957 [2024-07-25 07:31:48.655152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.957 [2024-07-25 07:31:48.655159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.957 [2024-07-25 07:31:48.655164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.957 [2024-07-25 07:31:48.655191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.894 [2024-07-25 07:31:49.370705] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.894 malloc0 00:17:16.894 [2024-07-25 07:31:49.399882] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.894 [2024-07-25 07:31:49.400075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85305 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85305 /var/tmp/bdevperf.sock 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85305 ']' 
00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.894 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.894 [2024-07-25 07:31:49.471198] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:16.894 [2024-07-25 07:31:49.471303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85305 ] 00:17:16.894 [2024-07-25 07:31:49.595153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.153 [2024-07-25 07:31:49.716536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.722 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.722 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:17.722 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AUDfLvpYTE 00:17:17.981 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:18.241 [2024-07-25 07:31:50.760390] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.241 nvme0n1 00:17:18.241 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:18.241 Running I/O for 1 seconds... 
00:17:19.622 00:17:19.622 Latency(us) 00:17:19.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.622 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.622 Verification LBA range: start 0x0 length 0x2000 00:17:19.622 nvme0n1 : 1.01 5378.44 21.01 0.00 0.00 23599.27 4950.97 17972.32 00:17:19.622 =================================================================================================================== 00:17:19.622 Total : 5378.44 21.01 0.00 0.00 23599.27 4950.97 17972.32 00:17:19.622 0 00:17:19.622 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:17:19.622 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.622 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.622 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.622 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:17:19.622 "subsystems": [ 00:17:19.622 { 00:17:19.622 "subsystem": "keyring", 00:17:19.622 "config": [ 00:17:19.622 { 00:17:19.622 "method": "keyring_file_add_key", 00:17:19.622 "params": { 00:17:19.622 "name": "key0", 00:17:19.622 "path": "/tmp/tmp.AUDfLvpYTE" 00:17:19.622 } 00:17:19.622 } 00:17:19.622 ] 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "subsystem": "iobuf", 00:17:19.622 "config": [ 00:17:19.622 { 00:17:19.622 "method": "iobuf_set_options", 00:17:19.622 "params": { 00:17:19.622 "large_bufsize": 135168, 00:17:19.622 "large_pool_count": 1024, 00:17:19.622 "small_bufsize": 8192, 00:17:19.622 "small_pool_count": 8192 00:17:19.622 } 00:17:19.622 } 00:17:19.622 ] 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "subsystem": "sock", 00:17:19.622 "config": [ 00:17:19.622 { 00:17:19.622 "method": "sock_set_default_impl", 00:17:19.622 "params": { 00:17:19.622 "impl_name": "posix" 00:17:19.622 } 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "method": "sock_impl_set_options", 00:17:19.622 "params": { 00:17:19.622 "enable_ktls": false, 00:17:19.622 "enable_placement_id": 0, 00:17:19.622 "enable_quickack": false, 00:17:19.622 "enable_recv_pipe": true, 00:17:19.622 "enable_zerocopy_send_client": false, 00:17:19.622 "enable_zerocopy_send_server": true, 00:17:19.622 "impl_name": "ssl", 00:17:19.622 "recv_buf_size": 4096, 00:17:19.622 "send_buf_size": 4096, 00:17:19.622 "tls_version": 0, 00:17:19.622 "zerocopy_threshold": 0 00:17:19.622 } 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "method": "sock_impl_set_options", 00:17:19.622 "params": { 00:17:19.622 "enable_ktls": false, 00:17:19.622 "enable_placement_id": 0, 00:17:19.622 "enable_quickack": false, 00:17:19.622 "enable_recv_pipe": true, 00:17:19.622 "enable_zerocopy_send_client": false, 00:17:19.622 "enable_zerocopy_send_server": true, 00:17:19.622 "impl_name": "posix", 00:17:19.622 "recv_buf_size": 2097152, 00:17:19.622 "send_buf_size": 2097152, 00:17:19.622 "tls_version": 0, 00:17:19.622 "zerocopy_threshold": 0 00:17:19.622 } 00:17:19.622 } 00:17:19.622 ] 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "subsystem": "vmd", 00:17:19.622 "config": [] 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "subsystem": "accel", 00:17:19.622 "config": [ 00:17:19.622 { 00:17:19.622 "method": "accel_set_options", 00:17:19.622 "params": { 00:17:19.622 "buf_count": 2048, 00:17:19.622 "large_cache_size": 16, 00:17:19.622 "sequence_count": 2048, 00:17:19.622 "small_cache_size": 128, 00:17:19.622 "task_count": 
2048 00:17:19.622 } 00:17:19.622 } 00:17:19.622 ] 00:17:19.622 }, 00:17:19.622 { 00:17:19.622 "subsystem": "bdev", 00:17:19.623 "config": [ 00:17:19.623 { 00:17:19.623 "method": "bdev_set_options", 00:17:19.623 "params": { 00:17:19.623 "bdev_auto_examine": true, 00:17:19.623 "bdev_io_cache_size": 256, 00:17:19.623 "bdev_io_pool_size": 65535, 00:17:19.623 "iobuf_large_cache_size": 16, 00:17:19.623 "iobuf_small_cache_size": 128 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "bdev_raid_set_options", 00:17:19.623 "params": { 00:17:19.623 "process_max_bandwidth_mb_sec": 0, 00:17:19.623 "process_window_size_kb": 1024 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "bdev_iscsi_set_options", 00:17:19.623 "params": { 00:17:19.623 "timeout_sec": 30 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "bdev_nvme_set_options", 00:17:19.623 "params": { 00:17:19.623 "action_on_timeout": "none", 00:17:19.623 "allow_accel_sequence": false, 00:17:19.623 "arbitration_burst": 0, 00:17:19.623 "bdev_retry_count": 3, 00:17:19.623 "ctrlr_loss_timeout_sec": 0, 00:17:19.623 "delay_cmd_submit": true, 00:17:19.623 "dhchap_dhgroups": [ 00:17:19.623 "null", 00:17:19.623 "ffdhe2048", 00:17:19.623 "ffdhe3072", 00:17:19.623 "ffdhe4096", 00:17:19.623 "ffdhe6144", 00:17:19.623 "ffdhe8192" 00:17:19.623 ], 00:17:19.623 "dhchap_digests": [ 00:17:19.623 "sha256", 00:17:19.623 "sha384", 00:17:19.623 "sha512" 00:17:19.623 ], 00:17:19.623 "disable_auto_failback": false, 00:17:19.623 "fast_io_fail_timeout_sec": 0, 00:17:19.623 "generate_uuids": false, 00:17:19.623 "high_priority_weight": 0, 00:17:19.623 "io_path_stat": false, 00:17:19.623 "io_queue_requests": 0, 00:17:19.623 "keep_alive_timeout_ms": 10000, 00:17:19.623 "low_priority_weight": 0, 00:17:19.623 "medium_priority_weight": 0, 00:17:19.623 "nvme_adminq_poll_period_us": 10000, 00:17:19.623 "nvme_error_stat": false, 00:17:19.623 "nvme_ioq_poll_period_us": 0, 00:17:19.623 "rdma_cm_event_timeout_ms": 0, 00:17:19.623 "rdma_max_cq_size": 0, 00:17:19.623 "rdma_srq_size": 0, 00:17:19.623 "reconnect_delay_sec": 0, 00:17:19.623 "timeout_admin_us": 0, 00:17:19.623 "timeout_us": 0, 00:17:19.623 "transport_ack_timeout": 0, 00:17:19.623 "transport_retry_count": 4, 00:17:19.623 "transport_tos": 0 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "bdev_nvme_set_hotplug", 00:17:19.623 "params": { 00:17:19.623 "enable": false, 00:17:19.623 "period_us": 100000 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "bdev_malloc_create", 00:17:19.623 "params": { 00:17:19.623 "block_size": 4096, 00:17:19.623 "dif_is_head_of_md": false, 00:17:19.623 "dif_pi_format": 0, 00:17:19.623 "dif_type": 0, 00:17:19.623 "md_size": 0, 00:17:19.623 "name": "malloc0", 00:17:19.623 "num_blocks": 8192, 00:17:19.623 "optimal_io_boundary": 0, 00:17:19.623 "physical_block_size": 4096, 00:17:19.623 "uuid": "43454ad7-e60d-4ed4-a283-8a2d6a055443" 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "bdev_wait_for_examine" 00:17:19.623 } 00:17:19.623 ] 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "subsystem": "nbd", 00:17:19.623 "config": [] 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "subsystem": "scheduler", 00:17:19.623 "config": [ 00:17:19.623 { 00:17:19.623 "method": "framework_set_scheduler", 00:17:19.623 "params": { 00:17:19.623 "name": "static" 00:17:19.623 } 00:17:19.623 } 00:17:19.623 ] 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "subsystem": "nvmf", 00:17:19.623 "config": [ 00:17:19.623 { 00:17:19.623 
"method": "nvmf_set_config", 00:17:19.623 "params": { 00:17:19.623 "admin_cmd_passthru": { 00:17:19.623 "identify_ctrlr": false 00:17:19.623 }, 00:17:19.623 "discovery_filter": "match_any" 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_set_max_subsystems", 00:17:19.623 "params": { 00:17:19.623 "max_subsystems": 1024 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_set_crdt", 00:17:19.623 "params": { 00:17:19.623 "crdt1": 0, 00:17:19.623 "crdt2": 0, 00:17:19.623 "crdt3": 0 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_create_transport", 00:17:19.623 "params": { 00:17:19.623 "abort_timeout_sec": 1, 00:17:19.623 "ack_timeout": 0, 00:17:19.623 "buf_cache_size": 4294967295, 00:17:19.623 "c2h_success": false, 00:17:19.623 "data_wr_pool_size": 0, 00:17:19.623 "dif_insert_or_strip": false, 00:17:19.623 "in_capsule_data_size": 4096, 00:17:19.623 "io_unit_size": 131072, 00:17:19.623 "max_aq_depth": 128, 00:17:19.623 "max_io_qpairs_per_ctrlr": 127, 00:17:19.623 "max_io_size": 131072, 00:17:19.623 "max_queue_depth": 128, 00:17:19.623 "num_shared_buffers": 511, 00:17:19.623 "sock_priority": 0, 00:17:19.623 "trtype": "TCP", 00:17:19.623 "zcopy": false 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_create_subsystem", 00:17:19.623 "params": { 00:17:19.623 "allow_any_host": false, 00:17:19.623 "ana_reporting": false, 00:17:19.623 "max_cntlid": 65519, 00:17:19.623 "max_namespaces": 32, 00:17:19.623 "min_cntlid": 1, 00:17:19.623 "model_number": "SPDK bdev Controller", 00:17:19.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.623 "serial_number": "00000000000000000000" 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_subsystem_add_host", 00:17:19.623 "params": { 00:17:19.623 "host": "nqn.2016-06.io.spdk:host1", 00:17:19.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.623 "psk": "key0" 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_subsystem_add_ns", 00:17:19.623 "params": { 00:17:19.623 "namespace": { 00:17:19.623 "bdev_name": "malloc0", 00:17:19.623 "nguid": "43454AD7E60D4ED4A2838A2D6A055443", 00:17:19.623 "no_auto_visible": false, 00:17:19.623 "nsid": 1, 00:17:19.623 "uuid": "43454ad7-e60d-4ed4-a283-8a2d6a055443" 00:17:19.623 }, 00:17:19.623 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:19.623 } 00:17:19.623 }, 00:17:19.623 { 00:17:19.623 "method": "nvmf_subsystem_add_listener", 00:17:19.623 "params": { 00:17:19.623 "listen_address": { 00:17:19.623 "adrfam": "IPv4", 00:17:19.623 "traddr": "10.0.0.2", 00:17:19.623 "trsvcid": "4420", 00:17:19.623 "trtype": "TCP" 00:17:19.623 }, 00:17:19.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.623 "secure_channel": false, 00:17:19.623 "sock_impl": "ssl" 00:17:19.623 } 00:17:19.623 } 00:17:19.623 ] 00:17:19.623 } 00:17:19.623 ] 00:17:19.623 }' 00:17:19.623 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:19.882 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:17:19.882 "subsystems": [ 00:17:19.882 { 00:17:19.882 "subsystem": "keyring", 00:17:19.882 "config": [ 00:17:19.882 { 00:17:19.882 "method": "keyring_file_add_key", 00:17:19.882 "params": { 00:17:19.882 "name": "key0", 00:17:19.882 "path": "/tmp/tmp.AUDfLvpYTE" 00:17:19.882 } 00:17:19.882 } 00:17:19.882 ] 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "subsystem": "iobuf", 00:17:19.882 "config": [ 00:17:19.882 
{ 00:17:19.882 "method": "iobuf_set_options", 00:17:19.882 "params": { 00:17:19.882 "large_bufsize": 135168, 00:17:19.882 "large_pool_count": 1024, 00:17:19.882 "small_bufsize": 8192, 00:17:19.882 "small_pool_count": 8192 00:17:19.882 } 00:17:19.882 } 00:17:19.882 ] 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "subsystem": "sock", 00:17:19.882 "config": [ 00:17:19.882 { 00:17:19.882 "method": "sock_set_default_impl", 00:17:19.882 "params": { 00:17:19.882 "impl_name": "posix" 00:17:19.882 } 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "method": "sock_impl_set_options", 00:17:19.882 "params": { 00:17:19.882 "enable_ktls": false, 00:17:19.882 "enable_placement_id": 0, 00:17:19.882 "enable_quickack": false, 00:17:19.882 "enable_recv_pipe": true, 00:17:19.882 "enable_zerocopy_send_client": false, 00:17:19.882 "enable_zerocopy_send_server": true, 00:17:19.882 "impl_name": "ssl", 00:17:19.882 "recv_buf_size": 4096, 00:17:19.882 "send_buf_size": 4096, 00:17:19.882 "tls_version": 0, 00:17:19.882 "zerocopy_threshold": 0 00:17:19.882 } 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "method": "sock_impl_set_options", 00:17:19.882 "params": { 00:17:19.882 "enable_ktls": false, 00:17:19.882 "enable_placement_id": 0, 00:17:19.882 "enable_quickack": false, 00:17:19.882 "enable_recv_pipe": true, 00:17:19.882 "enable_zerocopy_send_client": false, 00:17:19.882 "enable_zerocopy_send_server": true, 00:17:19.882 "impl_name": "posix", 00:17:19.882 "recv_buf_size": 2097152, 00:17:19.882 "send_buf_size": 2097152, 00:17:19.882 "tls_version": 0, 00:17:19.882 "zerocopy_threshold": 0 00:17:19.882 } 00:17:19.882 } 00:17:19.882 ] 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "subsystem": "vmd", 00:17:19.882 "config": [] 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "subsystem": "accel", 00:17:19.882 "config": [ 00:17:19.882 { 00:17:19.882 "method": "accel_set_options", 00:17:19.882 "params": { 00:17:19.882 "buf_count": 2048, 00:17:19.882 "large_cache_size": 16, 00:17:19.882 "sequence_count": 2048, 00:17:19.882 "small_cache_size": 128, 00:17:19.882 "task_count": 2048 00:17:19.882 } 00:17:19.882 } 00:17:19.882 ] 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "subsystem": "bdev", 00:17:19.882 "config": [ 00:17:19.882 { 00:17:19.882 "method": "bdev_set_options", 00:17:19.882 "params": { 00:17:19.882 "bdev_auto_examine": true, 00:17:19.882 "bdev_io_cache_size": 256, 00:17:19.882 "bdev_io_pool_size": 65535, 00:17:19.882 "iobuf_large_cache_size": 16, 00:17:19.882 "iobuf_small_cache_size": 128 00:17:19.882 } 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "method": "bdev_raid_set_options", 00:17:19.882 "params": { 00:17:19.882 "process_max_bandwidth_mb_sec": 0, 00:17:19.882 "process_window_size_kb": 1024 00:17:19.882 } 00:17:19.882 }, 00:17:19.882 { 00:17:19.882 "method": "bdev_iscsi_set_options", 00:17:19.882 "params": { 00:17:19.882 "timeout_sec": 30 00:17:19.882 } 00:17:19.882 }, 00:17:19.883 { 00:17:19.883 "method": "bdev_nvme_set_options", 00:17:19.883 "params": { 00:17:19.883 "action_on_timeout": "none", 00:17:19.883 "allow_accel_sequence": false, 00:17:19.883 "arbitration_burst": 0, 00:17:19.883 "bdev_retry_count": 3, 00:17:19.883 "ctrlr_loss_timeout_sec": 0, 00:17:19.883 "delay_cmd_submit": true, 00:17:19.883 "dhchap_dhgroups": [ 00:17:19.883 "null", 00:17:19.883 "ffdhe2048", 00:17:19.883 "ffdhe3072", 00:17:19.883 "ffdhe4096", 00:17:19.883 "ffdhe6144", 00:17:19.883 "ffdhe8192" 00:17:19.883 ], 00:17:19.883 "dhchap_digests": [ 00:17:19.883 "sha256", 00:17:19.883 "sha384", 00:17:19.883 "sha512" 00:17:19.883 ], 00:17:19.883 
"disable_auto_failback": false, 00:17:19.883 "fast_io_fail_timeout_sec": 0, 00:17:19.883 "generate_uuids": false, 00:17:19.883 "high_priority_weight": 0, 00:17:19.883 "io_path_stat": false, 00:17:19.883 "io_queue_requests": 512, 00:17:19.883 "keep_alive_timeout_ms": 10000, 00:17:19.883 "low_priority_weight": 0, 00:17:19.883 "medium_priority_weight": 0, 00:17:19.883 "nvme_adminq_poll_period_us": 10000, 00:17:19.883 "nvme_error_stat": false, 00:17:19.883 "nvme_ioq_poll_period_us": 0, 00:17:19.883 "rdma_cm_event_timeout_ms": 0, 00:17:19.883 "rdma_max_cq_size": 0, 00:17:19.883 "rdma_srq_size": 0, 00:17:19.883 "reconnect_delay_sec": 0, 00:17:19.883 "timeout_admin_us": 0, 00:17:19.883 "timeout_us": 0, 00:17:19.883 "transport_ack_timeout": 0, 00:17:19.883 "transport_retry_count": 4, 00:17:19.883 "transport_tos": 0 00:17:19.883 } 00:17:19.883 }, 00:17:19.883 { 00:17:19.883 "method": "bdev_nvme_attach_controller", 00:17:19.883 "params": { 00:17:19.883 "adrfam": "IPv4", 00:17:19.883 "ctrlr_loss_timeout_sec": 0, 00:17:19.883 "ddgst": false, 00:17:19.883 "fast_io_fail_timeout_sec": 0, 00:17:19.883 "hdgst": false, 00:17:19.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.883 "name": "nvme0", 00:17:19.883 "prchk_guard": false, 00:17:19.883 "prchk_reftag": false, 00:17:19.883 "psk": "key0", 00:17:19.883 "reconnect_delay_sec": 0, 00:17:19.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.883 "traddr": "10.0.0.2", 00:17:19.883 "trsvcid": "4420", 00:17:19.883 "trtype": "TCP" 00:17:19.883 } 00:17:19.883 }, 00:17:19.883 { 00:17:19.883 "method": "bdev_nvme_set_hotplug", 00:17:19.883 "params": { 00:17:19.883 "enable": false, 00:17:19.883 "period_us": 100000 00:17:19.883 } 00:17:19.883 }, 00:17:19.883 { 00:17:19.883 "method": "bdev_enable_histogram", 00:17:19.883 "params": { 00:17:19.883 "enable": true, 00:17:19.883 "name": "nvme0n1" 00:17:19.883 } 00:17:19.883 }, 00:17:19.883 { 00:17:19.883 "method": "bdev_wait_for_examine" 00:17:19.883 } 00:17:19.883 ] 00:17:19.883 }, 00:17:19.883 { 00:17:19.883 "subsystem": "nbd", 00:17:19.883 "config": [] 00:17:19.883 } 00:17:19.883 ] 00:17:19.883 }' 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 85305 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85305 ']' 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85305 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85305 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:19.883 killing process with pid 85305 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85305' 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85305 00:17:19.883 Received shutdown signal, test time was about 1.000000 seconds 00:17:19.883 00:17:19.883 Latency(us) 00:17:19.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.883 
=================================================================================================================== 00:17:19.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.883 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85305 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 85256 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85256 ']' 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85256 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85256 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:20.151 killing process with pid 85256 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85256' 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85256 00:17:20.151 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85256 00:17:20.451 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:17:20.451 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.451 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:17:20.451 "subsystems": [ 00:17:20.451 { 00:17:20.451 "subsystem": "keyring", 00:17:20.451 "config": [ 00:17:20.451 { 00:17:20.451 "method": "keyring_file_add_key", 00:17:20.451 "params": { 00:17:20.451 "name": "key0", 00:17:20.451 "path": "/tmp/tmp.AUDfLvpYTE" 00:17:20.451 } 00:17:20.451 } 00:17:20.451 ] 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "subsystem": "iobuf", 00:17:20.451 "config": [ 00:17:20.451 { 00:17:20.451 "method": "iobuf_set_options", 00:17:20.451 "params": { 00:17:20.451 "large_bufsize": 135168, 00:17:20.451 "large_pool_count": 1024, 00:17:20.451 "small_bufsize": 8192, 00:17:20.451 "small_pool_count": 8192 00:17:20.451 } 00:17:20.451 } 00:17:20.451 ] 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "subsystem": "sock", 00:17:20.451 "config": [ 00:17:20.451 { 00:17:20.451 "method": "sock_set_default_impl", 00:17:20.451 "params": { 00:17:20.451 "impl_name": "posix" 00:17:20.451 } 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "method": "sock_impl_set_options", 00:17:20.451 "params": { 00:17:20.451 "enable_ktls": false, 00:17:20.451 "enable_placement_id": 0, 00:17:20.451 "enable_quickack": false, 00:17:20.451 "enable_recv_pipe": true, 00:17:20.451 "enable_zerocopy_send_client": false, 00:17:20.451 "enable_zerocopy_send_server": true, 00:17:20.451 "impl_name": "ssl", 00:17:20.451 "recv_buf_size": 4096, 00:17:20.451 "send_buf_size": 4096, 00:17:20.451 "tls_version": 0, 00:17:20.451 "zerocopy_threshold": 0 00:17:20.451 } 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "method": "sock_impl_set_options", 00:17:20.451 "params": { 00:17:20.451 "enable_ktls": false, 00:17:20.451 "enable_placement_id": 0, 00:17:20.451 
"enable_quickack": false, 00:17:20.451 "enable_recv_pipe": true, 00:17:20.451 "enable_zerocopy_send_client": false, 00:17:20.451 "enable_zerocopy_send_server": true, 00:17:20.451 "impl_name": "posix", 00:17:20.451 "recv_buf_size": 2097152, 00:17:20.451 "send_buf_size": 2097152, 00:17:20.451 "tls_version": 0, 00:17:20.451 "zerocopy_threshold": 0 00:17:20.451 } 00:17:20.451 } 00:17:20.451 ] 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "subsystem": "vmd", 00:17:20.451 "config": [] 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "subsystem": "accel", 00:17:20.451 "config": [ 00:17:20.451 { 00:17:20.451 "method": "accel_set_options", 00:17:20.451 "params": { 00:17:20.451 "buf_count": 2048, 00:17:20.451 "large_cache_size": 16, 00:17:20.451 "sequence_count": 2048, 00:17:20.451 "small_cache_size": 128, 00:17:20.451 "task_count": 2048 00:17:20.451 } 00:17:20.451 } 00:17:20.451 ] 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "subsystem": "bdev", 00:17:20.451 "config": [ 00:17:20.451 { 00:17:20.451 "method": "bdev_set_options", 00:17:20.451 "params": { 00:17:20.451 "bdev_auto_examine": true, 00:17:20.451 "bdev_io_cache_size": 256, 00:17:20.451 "bdev_io_pool_size": 65535, 00:17:20.451 "iobuf_large_cache_size": 16, 00:17:20.451 "iobuf_small_cache_size": 128 00:17:20.451 } 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "method": "bdev_raid_set_options", 00:17:20.451 "params": { 00:17:20.451 "process_max_bandwidth_mb_sec": 0, 00:17:20.451 "process_window_size_kb": 1024 00:17:20.451 } 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "method": "bdev_iscsi_set_options", 00:17:20.451 "params": { 00:17:20.451 "timeout_sec": 30 00:17:20.451 } 00:17:20.451 }, 00:17:20.451 { 00:17:20.451 "method": "bdev_nvme_set_options", 00:17:20.451 "params": { 00:17:20.451 "action_on_timeout": "none", 00:17:20.451 "allow_accel_sequence": false, 00:17:20.451 "arbitration_burst": 0, 00:17:20.451 "bdev_retry_count": 3, 00:17:20.451 "ctrlr_loss_timeout_sec": 0, 00:17:20.451 "delay_cmd_submit": true, 00:17:20.451 "dhchap_dhgroups": [ 00:17:20.451 "null", 00:17:20.451 "ffdhe2048", 00:17:20.451 "ffdhe3072", 00:17:20.451 "ffdhe4096", 00:17:20.451 "ffdhe6144", 00:17:20.451 "ffdhe8192" 00:17:20.451 ], 00:17:20.451 "dhchap_digests": [ 00:17:20.451 "sha256", 00:17:20.451 "sha384", 00:17:20.451 "sha512" 00:17:20.451 ], 00:17:20.451 "disable_auto_failback": false, 00:17:20.451 "fast_io_fail_timeout_sec": 0, 00:17:20.451 "generate_uuids": false, 00:17:20.451 "high_priority_weight": 0, 00:17:20.451 "io_path_stat": false, 00:17:20.451 "io_queue_requests": 0, 00:17:20.452 "keep_alive_timeout_ms": 10000, 00:17:20.452 "low_priority_weight": 0, 00:17:20.452 "medium_priority_weight": 0, 00:17:20.452 "nvme_adminq_poll_period_us": 10000, 00:17:20.452 "nvme_error_stat": false, 00:17:20.452 "nvme_ioq_poll_period_us": 0, 00:17:20.452 "rdma_cm_event_timeout_ms": 0, 00:17:20.452 "rdma_max_cq_size": 0, 00:17:20.452 "rdma_srq_size": 0, 00:17:20.452 "reconnect_delay_sec": 0, 00:17:20.452 "timeout_admin_us": 0, 00:17:20.452 "timeout_us": 0, 00:17:20.452 "transport_ack_timeout": 0, 00:17:20.452 "transport_retry_count": 4, 00:17:20.452 "transport_tos": 0 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "bdev_nvme_set_hotplug", 00:17:20.452 "params": { 00:17:20.452 "enable": false, 00:17:20.452 "period_us": 100000 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "bdev_malloc_create", 00:17:20.452 "params": { 00:17:20.452 "block_size": 4096, 00:17:20.452 "dif_is_head_of_md": false, 00:17:20.452 "dif_pi_format": 0, 00:17:20.452 
"dif_type": 0, 00:17:20.452 "md_size": 0, 00:17:20.452 "name": "malloc0", 00:17:20.452 "num_blocks": 8192, 00:17:20.452 "optimal_io_boundary": 0, 00:17:20.452 "physical_block_size": 4096, 00:17:20.452 "uuid": "43454ad7-e60d-4ed4-a283-8a2d6a055443" 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "bdev_wait_for_examine" 00:17:20.452 } 00:17:20.452 ] 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "subsystem": "nbd", 00:17:20.452 "config": [] 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "subsystem": "scheduler", 00:17:20.452 "config": [ 00:17:20.452 { 00:17:20.452 "method": "framework_set_scheduler", 00:17:20.452 "params": { 00:17:20.452 "name": "static" 00:17:20.452 } 00:17:20.452 } 00:17:20.452 ] 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "subsystem": "nvmf", 00:17:20.452 "config": [ 00:17:20.452 { 00:17:20.452 "method": "nvmf_set_config", 00:17:20.452 "params": { 00:17:20.452 "admin_cmd_passthru": { 00:17:20.452 "identify_ctrlr": false 00:17:20.452 }, 00:17:20.452 "discovery_filter": "match_any" 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_set_max_subsystems", 00:17:20.452 "params": { 00:17:20.452 "max_subsystems": 1024 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_set_crdt", 00:17:20.452 "params": { 00:17:20.452 "crdt1": 0, 00:17:20.452 "crdt2": 0, 00:17:20.452 "crdt3": 0 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_create_transport", 00:17:20.452 "params": { 00:17:20.452 "abort_timeout_sec": 1, 00:17:20.452 "ack_timeout": 0, 00:17:20.452 "buf_cache_size": 4294967295, 00:17:20.452 "c2h_success": false, 00:17:20.452 "data_wr_pool_size": 0, 00:17:20.452 "dif_insert_or_strip": false, 00:17:20.452 "in_capsule_data_size": 4096, 00:17:20.452 "io_unit_size": 131072, 00:17:20.452 "max_aq_depth": 128, 00:17:20.452 "max_io_qpairs_per_ctrlr": 127, 00:17:20.452 "max_io_size": 131072, 00:17:20.452 "max_queue_depth": 128, 00:17:20.452 "num_shared_buffers": 511, 00:17:20.452 "sock_priority": 0, 00:17:20.452 "trtype": "TCP", 00:17:20.452 "zcopy": false 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_create_subsystem", 00:17:20.452 "params": { 00:17:20.452 "allow_any_host": false, 00:17:20.452 "ana_reporting": false, 00:17:20.452 "max_cntlid": 65519, 00:17:20.452 "max_namespaces": 32, 00:17:20.452 "min_cntlid": 1, 00:17:20.452 "model_number": "SPDK bdev Controller", 00:17:20.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.452 "serial_number": "00000000000000000000" 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_subsystem_add_host", 00:17:20.452 "params": { 00:17:20.452 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.452 "psk": "key0" 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_subsystem_add_ns", 00:17:20.452 "params": { 00:17:20.452 "namespace": { 00:17:20.452 "bdev_name": "malloc0", 00:17:20.452 "nguid": "43454AD7E60D4ED4A2838A2D6A055443", 00:17:20.452 "no_auto_visible": false, 00:17:20.452 "nsid": 1, 00:17:20.452 "uuid": "43454ad7-e60d-4ed4-a283-8a2d6a055443" 00:17:20.452 }, 00:17:20.452 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:20.452 } 00:17:20.452 }, 00:17:20.452 { 00:17:20.452 "method": "nvmf_subsystem_add_listener", 00:17:20.452 "params": { 00:17:20.452 "listen_address": { 00:17:20.452 "adrfam": "IPv4", 00:17:20.452 "traddr": "10.0.0.2", 00:17:20.452 "trsvcid": "4420", 00:17:20.452 "trtype": "TCP" 00:17:20.452 }, 00:17:20.452 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:20.452 "secure_channel": false, 00:17:20.452 "sock_impl": "ssl" 00:17:20.452 } 00:17:20.452 } 00:17:20.452 ] 00:17:20.452 } 00:17:20.452 ] 00:17:20.452 }' 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85396 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85396 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85396 ']' 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:20.452 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.452 [2024-07-25 07:31:52.967209] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:20.452 [2024-07-25 07:31:52.967294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.452 [2024-07-25 07:31:53.109579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.711 [2024-07-25 07:31:53.213433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.711 [2024-07-25 07:31:53.213485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.711 [2024-07-25 07:31:53.213492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.711 [2024-07-25 07:31:53.213496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.711 [2024-07-25 07:31:53.213500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.711 [2024-07-25 07:31:53.213595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.711 [2024-07-25 07:31:53.433734] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.970 [2024-07-25 07:31:53.465628] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.970 [2024-07-25 07:31:53.465845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85436 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85436 /var/tmp/bdevperf.sock 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85436 ']' 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.229 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:17:21.229 "subsystems": [ 00:17:21.229 { 00:17:21.229 "subsystem": "keyring", 00:17:21.229 "config": [ 00:17:21.229 { 00:17:21.229 "method": "keyring_file_add_key", 00:17:21.229 "params": { 00:17:21.229 "name": "key0", 00:17:21.229 "path": "/tmp/tmp.AUDfLvpYTE" 00:17:21.229 } 00:17:21.229 } 00:17:21.229 ] 00:17:21.229 }, 00:17:21.229 { 00:17:21.229 "subsystem": "iobuf", 00:17:21.229 "config": [ 00:17:21.229 { 00:17:21.229 "method": "iobuf_set_options", 00:17:21.229 "params": { 00:17:21.229 "large_bufsize": 135168, 00:17:21.229 "large_pool_count": 1024, 00:17:21.229 "small_bufsize": 8192, 00:17:21.229 "small_pool_count": 8192 00:17:21.229 } 00:17:21.229 } 00:17:21.229 ] 00:17:21.229 }, 00:17:21.229 { 00:17:21.229 "subsystem": "sock", 00:17:21.229 "config": [ 00:17:21.229 { 00:17:21.229 "method": "sock_set_default_impl", 00:17:21.229 "params": { 00:17:21.229 "impl_name": "posix" 00:17:21.229 } 00:17:21.229 }, 00:17:21.229 { 00:17:21.229 "method": "sock_impl_set_options", 00:17:21.229 "params": { 00:17:21.229 "enable_ktls": false, 00:17:21.229 "enable_placement_id": 0, 00:17:21.229 "enable_quickack": false, 00:17:21.229 "enable_recv_pipe": true, 00:17:21.229 "enable_zerocopy_send_client": false, 00:17:21.229 "enable_zerocopy_send_server": true, 00:17:21.229 "impl_name": "ssl", 00:17:21.229 "recv_buf_size": 4096, 00:17:21.229 "send_buf_size": 4096, 00:17:21.229 "tls_version": 0, 00:17:21.229 "zerocopy_threshold": 0 00:17:21.229 } 00:17:21.229 }, 00:17:21.229 { 00:17:21.229 "method": "sock_impl_set_options", 00:17:21.229 "params": { 00:17:21.229 "enable_ktls": false, 00:17:21.229 "enable_placement_id": 0, 00:17:21.229 "enable_quickack": false, 00:17:21.229 "enable_recv_pipe": true, 
00:17:21.229 "enable_zerocopy_send_client": false, 00:17:21.229 "enable_zerocopy_send_server": true, 00:17:21.229 "impl_name": "posix", 00:17:21.229 "recv_buf_size": 2097152, 00:17:21.229 "send_buf_size": 2097152, 00:17:21.229 "tls_version": 0, 00:17:21.229 "zerocopy_threshold": 0 00:17:21.229 } 00:17:21.229 } 00:17:21.229 ] 00:17:21.229 }, 00:17:21.229 { 00:17:21.229 "subsystem": "vmd", 00:17:21.229 "config": [] 00:17:21.229 }, 00:17:21.229 { 00:17:21.229 "subsystem": "accel", 00:17:21.229 "config": [ 00:17:21.229 { 00:17:21.229 "method": "accel_set_options", 00:17:21.229 "params": { 00:17:21.229 "buf_count": 2048, 00:17:21.229 "large_cache_size": 16, 00:17:21.229 "sequence_count": 2048, 00:17:21.229 "small_cache_size": 128, 00:17:21.229 "task_count": 2048 00:17:21.229 } 00:17:21.230 } 00:17:21.230 ] 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "subsystem": "bdev", 00:17:21.230 "config": [ 00:17:21.230 { 00:17:21.230 "method": "bdev_set_options", 00:17:21.230 "params": { 00:17:21.230 "bdev_auto_examine": true, 00:17:21.230 "bdev_io_cache_size": 256, 00:17:21.230 "bdev_io_pool_size": 65535, 00:17:21.230 "iobuf_large_cache_size": 16, 00:17:21.230 "iobuf_small_cache_size": 128 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_raid_set_options", 00:17:21.230 "params": { 00:17:21.230 "process_max_bandwidth_mb_sec": 0, 00:17:21.230 "process_window_size_kb": 1024 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_iscsi_set_options", 00:17:21.230 "params": { 00:17:21.230 "timeout_sec": 30 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_nvme_set_options", 00:17:21.230 "params": { 00:17:21.230 "action_on_timeout": "none", 00:17:21.230 "allow_accel_sequence": false, 00:17:21.230 "arbitration_burst": 0, 00:17:21.230 "bdev_retry_count": 3, 00:17:21.230 "ctrlr_loss_timeout_sec": 0, 00:17:21.230 "delay_cmd_submit": true, 00:17:21.230 "dhchap_dhgroups": [ 00:17:21.230 "null", 00:17:21.230 "ffdhe2048", 00:17:21.230 "ffdhe3072", 00:17:21.230 "ffdhe4096", 00:17:21.230 "ffdhe6144", 00:17:21.230 "ffdhe8192" 00:17:21.230 ], 00:17:21.230 "dhchap_digests": [ 00:17:21.230 "sha256", 00:17:21.230 "sha384", 00:17:21.230 "sha512" 00:17:21.230 ], 00:17:21.230 "disable_auto_failback": false, 00:17:21.230 "fast_io_fail_timeout_sec": 0, 00:17:21.230 "generate_uuids": false, 00:17:21.230 "high_priority_weight": 0, 00:17:21.230 "io_path_stat": false, 00:17:21.230 "io_queue_requests": 512, 00:17:21.230 "keep_alive_timeout_ms": 10000, 00:17:21.230 "low_priority_weight": 0, 00:17:21.230 "medium_priority_weight": 0, 00:17:21.230 "nvme_adminq_poll_period_us": 10000, 00:17:21.230 "nvme_error_stat": false, 00:17:21.230 "nvme_ioq_poll_period_us": 0, 00:17:21.230 "rdma_cm_event_timeout_ms": 0, 00:17:21.230 "rdma_max_cq_size": 0, 00:17:21.230 "rdma_srq_size": 0, 00:17:21.230 "reconnect_delay_sec": 0, 00:17:21.230 "timeout_admin_us": 0, 00:17:21.230 "timeout_us": 0, 00:17:21.230 "transport_ack_timeout": 0, 00:17:21.230 "transport_retry_count": 4, 00:17:21.230 "transport_tos": 0 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_nvme_attach_controller", 00:17:21.230 "params": { 00:17:21.230 "adrfam": "IPv4", 00:17:21.230 "ctrlr_loss_timeout_sec": 0, 00:17:21.230 "ddgst": false, 00:17:21.230 "fast_io_fail_timeout_sec": 0, 00:17:21.230 "hdgst": false, 00:17:21.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.230 "name": "nvme0", 00:17:21.230 "prchk_guard": false, 00:17:21.230 "prchk_reftag": false, 00:17:21.230 "psk": "key0", 
00:17:21.230 "reconnect_delay_sec": 0, 00:17:21.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.230 "traddr": "10.0.0.2", 00:17:21.230 "trsvcid": "4420", 00:17:21.230 "trtype": "TCP" 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_nvme_set_hotplug", 00:17:21.230 "params": { 00:17:21.230 "enable": false, 00:17:21.230 "period_us": 100000 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_enable_histogram", 00:17:21.230 "params": { 00:17:21.230 "enable": true, 00:17:21.230 "name": "nvme0n1" 00:17:21.230 } 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "method": "bdev_wait_for_examine" 00:17:21.230 } 00:17:21.230 ] 00:17:21.230 }, 00:17:21.230 { 00:17:21.230 "subsystem": "nbd", 00:17:21.230 "config": [] 00:17:21.230 } 00:17:21.230 ] 00:17:21.230 }' 00:17:21.230 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.230 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:21.230 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.230 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.230 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.489 [2024-07-25 07:31:53.968615] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:21.490 [2024-07-25 07:31:53.968687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85436 ] 00:17:21.490 [2024-07-25 07:31:54.105396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.490 [2024-07-25 07:31:54.201459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.749 [2024-07-25 07:31:54.355619] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.317 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.317 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:22.317 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:22.317 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:17:22.576 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.576 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:22.576 Running I/O for 1 seconds... 
00:17:23.513 00:17:23.513 Latency(us) 00:17:23.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.513 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:23.513 Verification LBA range: start 0x0 length 0x2000 00:17:23.513 nvme0n1 : 1.01 5268.27 20.58 0.00 0.00 24082.42 5380.25 18544.68 00:17:23.513 =================================================================================================================== 00:17:23.513 Total : 5268.27 20.58 0.00 0.00 24082.42 5380.25 18544.68 00:17:23.513 0 00:17:23.513 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:17:23.513 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:23.774 nvmf_trace.0 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85436 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85436 ']' 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85436 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85436 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:23.774 killing process with pid 85436 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85436' 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85436 00:17:23.774 Received shutdown signal, test time was about 1.000000 seconds 00:17:23.774 00:17:23.774 Latency(us) 00:17:23.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.774 =================================================================================================================== 00:17:23.774 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.774 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85436 00:17:24.033 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:24.033 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.033 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:24.033 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.034 rmmod nvme_tcp 00:17:24.034 rmmod nvme_fabrics 00:17:24.034 rmmod nvme_keyring 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85396 ']' 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85396 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85396 ']' 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85396 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85396 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:24.034 killing process with pid 85396 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85396' 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85396 00:17:24.034 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85396 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.294 
07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:24.294 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WptzWTj3uH /tmp/tmp.Ed8lTuHUll /tmp/tmp.AUDfLvpYTE 00:17:24.294 00:17:24.294 real 1m24.476s 00:17:24.294 user 2m14.233s 00:17:24.294 sys 0m26.447s 00:17:24.294 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.294 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 ************************************ 00:17:24.294 END TEST nvmf_tls 00:17:24.294 ************************************ 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.554 ************************************ 00:17:24.554 START TEST nvmf_fips 00:17:24.554 ************************************ 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:24.554 * Looking for test storage... 00:17:24.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 
0 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.554 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:24.555 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:24.815 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:24.816 Error setting digest 00:17:24.816 009245A4367F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:24.816 009245A4367F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.816 
07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:24.816 Cannot find device "nvmf_tgt_br" 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.816 Cannot find device "nvmf_tgt_br2" 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:17:24.816 07:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:24.816 Cannot find device "nvmf_tgt_br" 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:24.816 Cannot find device "nvmf_tgt_br2" 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:17:24.816 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:25.076 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:25.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:25.076 00:17:25.076 --- 10.0.0.2 ping statistics --- 00:17:25.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.076 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:25.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:25.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:17:25.336 00:17:25.336 --- 10.0.0.3 ping statistics --- 00:17:25.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.336 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:25.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:17:25.336 00:17:25.336 --- 10.0.0.1 ping statistics --- 00:17:25.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.336 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85727 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85727 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85727 ']' 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.336 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:25.336 [2024-07-25 07:31:57.940640] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:17:25.336 [2024-07-25 07:31:57.940818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.594 [2024-07-25 07:31:58.082029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.594 [2024-07-25 07:31:58.185721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.594 [2024-07-25 07:31:58.185870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.594 [2024-07-25 07:31:58.185881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.594 [2024-07-25 07:31:58.185887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.594 [2024-07-25 07:31:58.185891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.594 [2024-07-25 07:31:58.185916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:26.162 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.420 [2024-07-25 07:31:59.056511] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.420 [2024-07-25 07:31:59.072450] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.420 [2024-07-25 07:31:59.072764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.420 [2024-07-25 07:31:59.101408] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
00:17:26.420 malloc0 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85785 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85785 /var/tmp/bdevperf.sock 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85785 ']' 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.420 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:26.679 [2024-07-25 07:31:59.209147] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:26.679 [2024-07-25 07:31:59.209342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85785 ] 00:17:26.679 [2024-07-25 07:31:59.348290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.939 [2024-07-25 07:31:59.453291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.509 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.509 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:27.509 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:27.769 [2024-07-25 07:32:00.294202] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.769 [2024-07-25 07:32:00.294307] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:27.769 TLSTESTn1 00:17:27.769 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.769 Running I/O for 10 seconds... 
00:17:37.756 00:17:37.756 Latency(us) 00:17:37.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.756 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:37.756 Verification LBA range: start 0x0 length 0x2000 00:17:37.756 TLSTESTn1 : 10.01 6067.56 23.70 0.00 0.00 21060.80 4722.03 21749.94 00:17:37.756 =================================================================================================================== 00:17:37.756 Total : 6067.56 23.70 0.00 0.00 21060.80 4722.03 21749.94 00:17:37.756 0 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:38.016 nvmf_trace.0 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85785 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85785 ']' 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85785 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85785 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85785' 00:17:38.016 killing process with pid 85785 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85785 00:17:38.016 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.016 00:17:38.016 Latency(us) 00:17:38.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.016 =================================================================================================================== 00:17:38.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.016 [2024-07-25 07:32:10.639629] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.016 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85785 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.285 rmmod nvme_tcp 00:17:38.285 rmmod nvme_fabrics 00:17:38.285 rmmod nvme_keyring 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85727 ']' 00:17:38.285 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85727 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85727 ']' 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85727 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85727 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:38.286 killing process with pid 85727 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85727' 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85727 00:17:38.286 [2024-07-25 07:32:10.969524] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:38.286 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85727 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:38.545 00:17:38.545 real 0m14.147s 00:17:38.545 user 0m19.513s 00:17:38.545 sys 0m5.368s 00:17:38.545 ************************************ 00:17:38.545 END TEST nvmf_fips 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:38.545 ************************************ 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:17:38.545 00:17:38.545 real 6m15.826s 00:17:38.545 user 15m10.697s 00:17:38.545 sys 1m13.133s 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:38.545 07:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.545 ************************************ 00:17:38.545 END TEST nvmf_target_extra 00:17:38.545 ************************************ 00:17:38.804 07:32:11 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:38.804 07:32:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:38.804 07:32:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.804 07:32:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.804 ************************************ 00:17:38.804 START TEST nvmf_host 00:17:38.804 ************************************ 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:38.804 * Looking for test storage... 
00:17:38.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.804 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.805 ************************************ 00:17:38.805 START TEST nvmf_multicontroller 00:17:38.805 ************************************ 00:17:38.805 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:39.065 * Looking for test storage... 
00:17:39.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.065 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:39.066 
07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:39.066 07:32:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:39.066 Cannot find device "nvmf_tgt_br" 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.066 Cannot find device "nvmf_tgt_br2" 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:39.066 Cannot find device "nvmf_tgt_br" 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:39.066 Cannot find device "nvmf_tgt_br2" 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:39.066 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set 
nvmf_init_br up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:39.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:17:39.327 00:17:39.327 --- 10.0.0.2 ping statistics --- 00:17:39.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.327 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:39.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:17:39.327 00:17:39.327 --- 10.0.0.3 ping statistics --- 00:17:39.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.327 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:39.327 00:17:39.327 --- 10.0.0.1 ping statistics --- 00:17:39.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.327 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=86168 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 86168 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86168 ']' 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.327 07:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:39.327 [2024-07-25 07:32:12.009535] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
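[Sketch, not part of the captured run] The nvmfappstart call traced above starts the target inside the test namespace and then waits for its RPC socket. A minimal manual reproduction of that step, assuming $SPDK_DIR points at the spdk checkout (the run above uses /home/vagrant/spdk_repo/spdk) and that the nvmf_tgt_ns_spdk namespace from nvmf_veth_init already exists; the polling loop below is a stand-in for the script's waitforlisten helper, not the helper itself:
    sudo ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket until the app answers (waitforlisten equivalent)
    until sudo "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done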
00:17:39.327 [2024-07-25 07:32:12.009617] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.586 [2024-07-25 07:32:12.148531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:39.586 [2024-07-25 07:32:12.243514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.586 [2024-07-25 07:32:12.243560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.587 [2024-07-25 07:32:12.243567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.587 [2024-07-25 07:32:12.243573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.587 [2024-07-25 07:32:12.243578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.587 [2024-07-25 07:32:12.243805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.587 [2024-07-25 07:32:12.243729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.587 [2024-07-25 07:32:12.243808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.154 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.154 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:40.154 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.154 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.154 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.412 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.412 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.412 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.412 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 [2024-07-25 07:32:12.907742] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 Malloc0 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 [2024-07-25 07:32:12.967502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 [2024-07-25 07:32:12.979402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 Malloc1 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86228 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86228 /var/tmp/bdevperf.sock 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86228 ']' 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
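[Sketch, not part of the captured run] The rpc_cmd calls traced at host/multicontroller.sh@27-@41 configure the target over /var/tmp/spdk.sock before bdevperf is launched. Spelled out as plain rpc.py invocations (cnode2 repeats the same pattern with Malloc1), assuming $SPDK_DIR is the repo root used in this run:
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf then starts in passive mode (-z) on its own RPC socket, so controllers can be
    # attached and detached against it before perform_tests drives I/O:
    "$SPDK_DIR"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &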
00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.413 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.348 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.348 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:41.348 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:41.348 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.348 07:32:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.348 NVMe0n1 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.348 1 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:41.348 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.349 2024/07/25 07:32:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 
hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:41.349 request: 00:17:41.349 { 00:17:41.349 "method": "bdev_nvme_attach_controller", 00:17:41.349 "params": { 00:17:41.349 "name": "NVMe0", 00:17:41.349 "trtype": "tcp", 00:17:41.349 "traddr": "10.0.0.2", 00:17:41.349 "adrfam": "ipv4", 00:17:41.349 "trsvcid": "4420", 00:17:41.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.349 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:41.349 "hostaddr": "10.0.0.2", 00:17:41.349 "hostsvcid": "60000", 00:17:41.349 "prchk_reftag": false, 00:17:41.349 "prchk_guard": false, 00:17:41.349 "hdgst": false, 00:17:41.349 "ddgst": false 00:17:41.349 } 00:17:41.349 } 00:17:41.349 Got JSON-RPC error response 00:17:41.349 GoRPCClient: error on JSON-RPC call 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:41.349 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:41.608 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.608 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:41.608 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.608 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:41.608 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.608 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.608 request: 00:17:41.608 2024/07/25 07:32:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 
trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:41.608 { 00:17:41.608 "method": "bdev_nvme_attach_controller", 00:17:41.608 "params": { 00:17:41.608 "name": "NVMe0", 00:17:41.608 "trtype": "tcp", 00:17:41.608 "traddr": "10.0.0.2", 00:17:41.608 "adrfam": "ipv4", 00:17:41.608 "trsvcid": "4420", 00:17:41.608 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:41.608 "hostaddr": "10.0.0.2", 00:17:41.608 "hostsvcid": "60000", 00:17:41.608 "prchk_reftag": false, 00:17:41.608 "prchk_guard": false, 00:17:41.608 "hdgst": false, 00:17:41.608 "ddgst": false 00:17:41.608 } 00:17:41.608 } 00:17:41.609 Got JSON-RPC error response 00:17:41.609 GoRPCClient: error on JSON-RPC call 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.609 2024/07/25 07:32:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:41.609 request: 00:17:41.609 { 
00:17:41.609 "method": "bdev_nvme_attach_controller", 00:17:41.609 "params": { 00:17:41.609 "name": "NVMe0", 00:17:41.609 "trtype": "tcp", 00:17:41.609 "traddr": "10.0.0.2", 00:17:41.609 "adrfam": "ipv4", 00:17:41.609 "trsvcid": "4420", 00:17:41.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.609 "hostaddr": "10.0.0.2", 00:17:41.609 "hostsvcid": "60000", 00:17:41.609 "prchk_reftag": false, 00:17:41.609 "prchk_guard": false, 00:17:41.609 "hdgst": false, 00:17:41.609 "ddgst": false, 00:17:41.609 "multipath": "disable" 00:17:41.609 } 00:17:41.609 } 00:17:41.609 Got JSON-RPC error response 00:17:41.609 GoRPCClient: error on JSON-RPC call 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.609 2024/07/25 07:32:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:41.609 request: 00:17:41.609 { 00:17:41.609 "method": "bdev_nvme_attach_controller", 00:17:41.609 "params": { 00:17:41.609 "name": "NVMe0", 00:17:41.609 "trtype": "tcp", 00:17:41.609 
"traddr": "10.0.0.2", 00:17:41.609 "adrfam": "ipv4", 00:17:41.609 "trsvcid": "4420", 00:17:41.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.609 "hostaddr": "10.0.0.2", 00:17:41.609 "hostsvcid": "60000", 00:17:41.609 "prchk_reftag": false, 00:17:41.609 "prchk_guard": false, 00:17:41.609 "hdgst": false, 00:17:41.609 "ddgst": false, 00:17:41.609 "multipath": "failover" 00:17:41.609 } 00:17:41.609 } 00:17:41.609 Got JSON-RPC error response 00:17:41.609 GoRPCClient: error on JSON-RPC call 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.609 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.609 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.609 07:32:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:41.609 07:32:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.987 0 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86228 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86228 ']' 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86228 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86228 00:17:42.987 killing process with pid 86228 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86228' 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86228 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86228 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:17:42.987 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:42.987 [2024-07-25 07:32:13.107087] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:42.987 [2024-07-25 07:32:13.107463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86228 ] 00:17:42.987 [2024-07-25 07:32:13.244225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.987 [2024-07-25 07:32:13.337290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.987 [2024-07-25 07:32:14.286166] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name d14c6c65-fa9d-4d31-8d3f-6839e480285b already exists 00:17:42.987 [2024-07-25 07:32:14.286211] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:d14c6c65-fa9d-4d31-8d3f-6839e480285b alias for bdev NVMe1n1 00:17:42.987 [2024-07-25 07:32:14.286223] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:42.987 Running I/O for 1 seconds... 00:17:42.987 00:17:42.987 Latency(us) 00:17:42.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.987 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:42.987 NVMe0n1 : 1.00 25152.29 98.25 0.00 0.00 5081.91 1674.17 9558.53 00:17:42.987 =================================================================================================================== 00:17:42.987 Total : 25152.29 98.25 0.00 0.00 5081.91 1674.17 9558.53 00:17:42.987 Received shutdown signal, test time was about 1.000000 seconds 00:17:42.987 00:17:42.987 Latency(us) 00:17:42.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.987 =================================================================================================================== 00:17:42.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.987 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:42.987 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.246 rmmod nvme_tcp 00:17:43.246 rmmod nvme_fabrics 00:17:43.246 rmmod nvme_keyring 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 86168 ']' 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 86168 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86168 ']' 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86168 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86168 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86168' 00:17:43.246 killing process with pid 86168 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86168 00:17:43.246 07:32:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86168 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:43.505 00:17:43.505 real 0m4.654s 00:17:43.505 user 0m14.563s 00:17:43.505 sys 0m1.001s 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.505 ************************************ 00:17:43.505 END TEST nvmf_multicontroller 00:17:43.505 ************************************ 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.505 ************************************ 00:17:43.505 START TEST nvmf_aer 00:17:43.505 ************************************ 00:17:43.505 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:43.765 * Looking for test storage... 00:17:43.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 
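[Sketch, not part of the captured run] nvmftestinit, which aer.sh invokes next, rebuilds the same veth/namespace topology the multicontroller test used. Condensed from the ip/iptables commands traced in this log; the per-link "ip link set ... up" steps are omitted for brevity:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target address, 10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target address, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT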
00:17:43.765 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:43.766 Cannot find device "nvmf_tgt_br" 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # true 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.766 Cannot find device "nvmf_tgt_br2" 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # true 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:43.766 07:32:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:43.766 Cannot find device "nvmf_tgt_br" 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # true 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:43.766 Cannot find device "nvmf_tgt_br2" 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # true 00:17:43.766 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # 
ip link set nvmf_init_br master nvmf_br 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:44.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:44.026 00:17:44.026 --- 10.0.0.2 ping statistics --- 00:17:44.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.026 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:44.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:17:44.026 00:17:44.026 --- 10.0.0.3 ping statistics --- 00:17:44.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.026 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:44.026 00:17:44.026 --- 10.0.0.1 ping statistics --- 00:17:44.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.026 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.026 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86474 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86474 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.027 07:32:16 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86474 ']' 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.027 07:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:44.286 [2024-07-25 07:32:16.765020] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:44.286 [2024-07-25 07:32:16.765169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.286 [2024-07-25 07:32:16.903512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.286 [2024-07-25 07:32:16.995802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.286 [2024-07-25 07:32:16.995902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.286 [2024-07-25 07:32:16.995955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.286 [2024-07-25 07:32:16.995988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.286 [2024-07-25 07:32:16.996002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
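Condensing the nvmftestinit/nvmfappstart work traced above into standalone commands: the harness builds a veth/bridge sandbox with the target endpoints in their own network namespace, opens the firewall for port 4420, then launches nvmf_tgt inside that namespace. A sketch of the same steps follows (names, addresses and paths copied from this run); the polling loop at the end is only an illustrative stand-in for waitforlisten.

# Target namespace plus three veth pairs; the *_br ends stay on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator answers on 10.0.0.1; the namespaced target on .2 and .3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # quick reachability check, as in the trace above

# Start the target inside the namespace and wait for its RPC socket to answer.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done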
00:17:44.286 [2024-07-25 07:32:16.996253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.286 [2024-07-25 07:32:16.996352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.286 [2024-07-25 07:32:16.996433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.286 [2024-07-25 07:32:16.996439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 [2024-07-25 07:32:17.644508] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 Malloc0 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 [2024-07-25 07:32:17.721699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.221 
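The AER test proper is driven entirely over RPC plus one helper binary, as the trace above and below shows. A condensed sketch of that flow (rpc_cmd in the test is a thin wrapper around rpc.py; the short poll loop stands in for waitforfile):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, one malloc-backed namespace, subsystem capped at
# two namespaces (-m 2), listener on the namespaced address.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 --name Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: the aer helper connects, registers its asynchronous-event callback,
# and touches the given file once it is armed.
rm -f /tmp/aer_touch_file
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [[ ! -e /tmp/aer_touch_file ]]; do sleep 0.1; done

# Adding a second namespace is what fires the Namespace Attribute Changed
# notice (log page 4) that aer_cb reports further down in the log.
"$rpc" bdev_malloc_create 64 4096 --name Malloc1
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait "$aerpid"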
07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.221 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.221 [ 00:17:45.221 { 00:17:45.221 "allow_any_host": true, 00:17:45.221 "hosts": [], 00:17:45.222 "listen_addresses": [], 00:17:45.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.222 "subtype": "Discovery" 00:17:45.222 }, 00:17:45.222 { 00:17:45.222 "allow_any_host": true, 00:17:45.222 "hosts": [], 00:17:45.222 "listen_addresses": [ 00:17:45.222 { 00:17:45.222 "adrfam": "IPv4", 00:17:45.222 "traddr": "10.0.0.2", 00:17:45.222 "trsvcid": "4420", 00:17:45.222 "trtype": "TCP" 00:17:45.222 } 00:17:45.222 ], 00:17:45.222 "max_cntlid": 65519, 00:17:45.222 "max_namespaces": 2, 00:17:45.222 "min_cntlid": 1, 00:17:45.222 "model_number": "SPDK bdev Controller", 00:17:45.222 "namespaces": [ 00:17:45.222 { 00:17:45.222 "bdev_name": "Malloc0", 00:17:45.222 "name": "Malloc0", 00:17:45.222 "nguid": "5DB5A2AE086349A88BC4533291B98EF8", 00:17:45.222 "nsid": 1, 00:17:45.222 "uuid": "5db5a2ae-0863-49a8-8bc4-533291b98ef8" 00:17:45.222 } 00:17:45.222 ], 00:17:45.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.222 "serial_number": "SPDK00000000000001", 00:17:45.222 "subtype": "NVMe" 00:17:45.222 } 00:17:45.222 ] 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86536 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:17:45.222 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.481 Malloc1 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.481 07:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.481 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.481 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:45.481 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.481 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.481 Asynchronous Event Request test 00:17:45.481 Attaching to 10.0.0.2 00:17:45.482 Attached to 10.0.0.2 00:17:45.482 Registering asynchronous event callbacks... 00:17:45.482 Starting namespace attribute notice tests for all controllers... 00:17:45.482 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:45.482 aer_cb - Changed Namespace 00:17:45.482 Cleaning up... 00:17:45.482 [ 00:17:45.482 { 00:17:45.482 "allow_any_host": true, 00:17:45.482 "hosts": [], 00:17:45.482 "listen_addresses": [], 00:17:45.482 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.482 "subtype": "Discovery" 00:17:45.482 }, 00:17:45.482 { 00:17:45.482 "allow_any_host": true, 00:17:45.482 "hosts": [], 00:17:45.482 "listen_addresses": [ 00:17:45.482 { 00:17:45.482 "adrfam": "IPv4", 00:17:45.482 "traddr": "10.0.0.2", 00:17:45.482 "trsvcid": "4420", 00:17:45.482 "trtype": "TCP" 00:17:45.482 } 00:17:45.482 ], 00:17:45.482 "max_cntlid": 65519, 00:17:45.482 "max_namespaces": 2, 00:17:45.482 "min_cntlid": 1, 00:17:45.482 "model_number": "SPDK bdev Controller", 00:17:45.482 "namespaces": [ 00:17:45.482 { 00:17:45.482 "bdev_name": "Malloc0", 00:17:45.482 "name": "Malloc0", 00:17:45.482 "nguid": "5DB5A2AE086349A88BC4533291B98EF8", 00:17:45.482 "nsid": 1, 00:17:45.482 "uuid": "5db5a2ae-0863-49a8-8bc4-533291b98ef8" 00:17:45.482 }, 00:17:45.482 { 00:17:45.482 "bdev_name": "Malloc1", 00:17:45.482 "name": "Malloc1", 00:17:45.482 "nguid": "A83BDEDFD98B4268AD2FE77BD8304A2E", 00:17:45.482 "nsid": 2, 00:17:45.482 "uuid": "a83bdedf-d98b-4268-ad2f-e77bd8304a2e" 00:17:45.482 } 00:17:45.482 ], 00:17:45.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.482 "serial_number": "SPDK00000000000001", 00:17:45.482 "subtype": "NVMe" 00:17:45.482 } 00:17:45.482 ] 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86536 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.482 rmmod nvme_tcp 00:17:45.482 rmmod nvme_fabrics 00:17:45.482 rmmod nvme_keyring 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86474 ']' 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86474 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86474 ']' 00:17:45.482 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86474 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86474 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:45.742 killing process with pid 86474 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86474' 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86474 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86474 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.742 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:46.002 00:17:46.002 real 0m2.265s 00:17:46.002 user 0m5.947s 00:17:46.002 sys 0m0.641s 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 ************************************ 00:17:46.002 END TEST nvmf_aer 00:17:46.002 ************************************ 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 ************************************ 00:17:46.002 START TEST nvmf_async_init 00:17:46.002 ************************************ 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:46.002 * Looking for test storage... 
00:17:46.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.002 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:46.003 07:32:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ffe399f913f24839830771b550056cda 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:46.003 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:46.263 Cannot find device "nvmf_tgt_br" 00:17:46.263 07:32:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.263 Cannot find device "nvmf_tgt_br2" 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.263 Cannot find device "nvmf_tgt_br" 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.263 Cannot find device "nvmf_tgt_br2" 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.263 07:32:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:46.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:17:46.523 00:17:46.523 --- 10.0.0.2 ping statistics --- 00:17:46.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.523 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:46.523 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:46.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:46.523 00:17:46.524 --- 10.0.0.3 ping statistics --- 00:17:46.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.524 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:46.524 00:17:46.524 --- 10.0.0.1 ping statistics --- 00:17:46.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.524 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86701 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86701 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86701 ']' 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.524 07:32:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:46.524 [2024-07-25 07:32:19.204297] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:46.524 [2024-07-25 07:32:19.204357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.783 [2024-07-25 07:32:19.341202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.783 [2024-07-25 07:32:19.435243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
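What follows in the trace is the async_init scenario itself: export a null bdev under a fixed NGUID, attach to it from the same SPDK app with bdev_nvme, check that the uuid reported by bdev_get_bdevs lines up with that NGUID (visible in the JSON below), exercise a controller reset and detach, and finally repeat the attach over a TLS secure channel on port 4421. A condensed sketch in equivalent standalone rpc.py calls (values copied from this run; the PSK file path here is whatever mktemp happens to return):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nguid=$(uuidgen | tr -d -)             # e.g. ffe399f913f24839830771b550056cda in this run

# Target: null bdev (1024 blocks of 512 B) exposed as nsid 1 with a fixed NGUID.
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" bdev_null_create null0 1024 512
"$rpc" bdev_wait_for_examine
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host (same SPDK app, hence the same RPC socket): attach, inspect the bdev,
# then reset and detach the controller.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode0
"$rpc" bdev_get_bdevs -b nvme0n1
"$rpc" bdev_nvme_reset_controller nvme0
"$rpc" bdev_nvme_detach_controller nvme0

# TLS leg: restrict the subsystem to a known host, add a --secure-channel
# listener on 4421, register the host with a PSK file, and attach with it.
key=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
chmod 0600 "$key"
"$rpc" nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 \
       --secure-channel
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"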
00:17:46.783 [2024-07-25 07:32:19.435285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.783 [2024-07-25 07:32:19.435291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.783 [2024-07-25 07:32:19.435295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.783 [2024-07-25 07:32:19.435299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.783 [2024-07-25 07:32:19.435335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.353 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 [2024-07-25 07:32:20.087890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 null0 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ffe399f913f24839830771b550056cda 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 [2024-07-25 07:32:20.147832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.612 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 nvme0n1 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 [ 00:17:47.873 { 00:17:47.873 "aliases": [ 00:17:47.873 "ffe399f9-13f2-4839-8307-71b550056cda" 00:17:47.873 ], 00:17:47.873 "assigned_rate_limits": { 00:17:47.873 "r_mbytes_per_sec": 0, 00:17:47.873 "rw_ios_per_sec": 0, 00:17:47.873 "rw_mbytes_per_sec": 0, 00:17:47.873 "w_mbytes_per_sec": 0 00:17:47.873 }, 00:17:47.873 "block_size": 512, 00:17:47.873 "claimed": false, 00:17:47.873 "driver_specific": { 00:17:47.873 "mp_policy": "active_passive", 00:17:47.873 "nvme": [ 00:17:47.873 { 00:17:47.873 "ctrlr_data": { 00:17:47.873 "ana_reporting": false, 00:17:47.873 "cntlid": 1, 00:17:47.873 "firmware_revision": "24.09", 00:17:47.873 "model_number": "SPDK bdev Controller", 00:17:47.873 "multi_ctrlr": true, 00:17:47.873 "oacs": { 00:17:47.873 "firmware": 0, 00:17:47.873 "format": 0, 00:17:47.873 "ns_manage": 0, 00:17:47.873 "security": 0 00:17:47.873 }, 00:17:47.873 "serial_number": "00000000000000000000", 00:17:47.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.873 "vendor_id": "0x8086" 00:17:47.873 }, 00:17:47.873 "ns_data": { 00:17:47.873 "can_share": true, 00:17:47.873 "id": 1 00:17:47.873 }, 00:17:47.873 "trid": { 00:17:47.873 "adrfam": "IPv4", 00:17:47.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.873 "traddr": "10.0.0.2", 00:17:47.873 "trsvcid": "4420", 00:17:47.873 "trtype": "TCP" 00:17:47.873 }, 00:17:47.873 "vs": { 00:17:47.873 "nvme_version": "1.3" 00:17:47.873 } 00:17:47.873 } 00:17:47.873 ] 00:17:47.873 }, 00:17:47.873 "memory_domains": [ 00:17:47.873 { 00:17:47.873 "dma_device_id": "system", 00:17:47.873 "dma_device_type": 1 00:17:47.873 } 00:17:47.873 ], 00:17:47.873 "name": "nvme0n1", 00:17:47.873 "num_blocks": 2097152, 00:17:47.873 "product_name": "NVMe disk", 00:17:47.873 "supported_io_types": { 00:17:47.873 "abort": true, 00:17:47.873 "compare": true, 
00:17:47.873 "compare_and_write": true, 00:17:47.873 "copy": true, 00:17:47.873 "flush": true, 00:17:47.873 "get_zone_info": false, 00:17:47.873 "nvme_admin": true, 00:17:47.873 "nvme_io": true, 00:17:47.873 "nvme_io_md": false, 00:17:47.873 "nvme_iov_md": false, 00:17:47.873 "read": true, 00:17:47.873 "reset": true, 00:17:47.873 "seek_data": false, 00:17:47.873 "seek_hole": false, 00:17:47.873 "unmap": false, 00:17:47.873 "write": true, 00:17:47.873 "write_zeroes": true, 00:17:47.873 "zcopy": false, 00:17:47.873 "zone_append": false, 00:17:47.873 "zone_management": false 00:17:47.873 }, 00:17:47.873 "uuid": "ffe399f9-13f2-4839-8307-71b550056cda", 00:17:47.873 "zoned": false 00:17:47.873 } 00:17:47.873 ] 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 [2024-07-25 07:32:20.424798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:47.873 [2024-07-25 07:32:20.424865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1ab00 (9): Bad file descriptor 00:17:47.873 [2024-07-25 07:32:20.556228] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.873 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 [ 00:17:47.873 { 00:17:47.874 "aliases": [ 00:17:47.874 "ffe399f9-13f2-4839-8307-71b550056cda" 00:17:47.874 ], 00:17:47.874 "assigned_rate_limits": { 00:17:47.874 "r_mbytes_per_sec": 0, 00:17:47.874 "rw_ios_per_sec": 0, 00:17:47.874 "rw_mbytes_per_sec": 0, 00:17:47.874 "w_mbytes_per_sec": 0 00:17:47.874 }, 00:17:47.874 "block_size": 512, 00:17:47.874 "claimed": false, 00:17:47.874 "driver_specific": { 00:17:47.874 "mp_policy": "active_passive", 00:17:47.874 "nvme": [ 00:17:47.874 { 00:17:47.874 "ctrlr_data": { 00:17:47.874 "ana_reporting": false, 00:17:47.874 "cntlid": 2, 00:17:47.874 "firmware_revision": "24.09", 00:17:47.874 "model_number": "SPDK bdev Controller", 00:17:47.874 "multi_ctrlr": true, 00:17:47.874 "oacs": { 00:17:47.874 "firmware": 0, 00:17:47.874 "format": 0, 00:17:47.874 "ns_manage": 0, 00:17:47.874 "security": 0 00:17:47.874 }, 00:17:47.874 "serial_number": "00000000000000000000", 00:17:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.874 "vendor_id": "0x8086" 00:17:47.874 }, 00:17:47.874 "ns_data": { 00:17:47.874 "can_share": true, 00:17:47.874 "id": 1 00:17:47.874 }, 00:17:47.874 "trid": { 00:17:47.874 "adrfam": "IPv4", 00:17:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.874 "traddr": "10.0.0.2", 00:17:47.874 "trsvcid": "4420", 00:17:47.874 "trtype": "TCP" 00:17:47.874 }, 00:17:47.874 "vs": { 00:17:47.874 "nvme_version": "1.3" 00:17:47.874 } 00:17:47.874 } 00:17:47.874 ] 00:17:47.874 }, 00:17:47.874 "memory_domains": [ 00:17:47.874 { 
00:17:47.874 "dma_device_id": "system", 00:17:47.874 "dma_device_type": 1 00:17:47.874 } 00:17:47.874 ], 00:17:47.874 "name": "nvme0n1", 00:17:47.874 "num_blocks": 2097152, 00:17:47.874 "product_name": "NVMe disk", 00:17:47.874 "supported_io_types": { 00:17:47.874 "abort": true, 00:17:47.874 "compare": true, 00:17:47.874 "compare_and_write": true, 00:17:47.874 "copy": true, 00:17:47.874 "flush": true, 00:17:47.874 "get_zone_info": false, 00:17:47.874 "nvme_admin": true, 00:17:47.874 "nvme_io": true, 00:17:47.874 "nvme_io_md": false, 00:17:47.874 "nvme_iov_md": false, 00:17:47.874 "read": true, 00:17:47.874 "reset": true, 00:17:47.874 "seek_data": false, 00:17:47.874 "seek_hole": false, 00:17:47.874 "unmap": false, 00:17:47.874 "write": true, 00:17:47.874 "write_zeroes": true, 00:17:47.874 "zcopy": false, 00:17:47.874 "zone_append": false, 00:17:47.874 "zone_management": false 00:17:47.874 }, 00:17:47.874 "uuid": "ffe399f9-13f2-4839-8307-71b550056cda", 00:17:47.874 "zoned": false 00:17:47.874 } 00:17:47.874 ] 00:17:47.874 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.874 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.874 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.874 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.874 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.874 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UtODrWGa2G 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UtODrWGa2G 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.134 [2024-07-25 07:32:20.632540] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.134 [2024-07-25 07:32:20.632672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtODrWGa2G 00:17:48.134 07:32:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.134 [2024-07-25 07:32:20.644514] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtODrWGa2G 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.134 [2024-07-25 07:32:20.656496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.134 [2024-07-25 07:32:20.656539] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:48.134 nvme0n1 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.134 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.134 [ 00:17:48.134 { 00:17:48.134 "aliases": [ 00:17:48.134 "ffe399f9-13f2-4839-8307-71b550056cda" 00:17:48.134 ], 00:17:48.134 "assigned_rate_limits": { 00:17:48.134 "r_mbytes_per_sec": 0, 00:17:48.134 "rw_ios_per_sec": 0, 00:17:48.134 "rw_mbytes_per_sec": 0, 00:17:48.134 "w_mbytes_per_sec": 0 00:17:48.134 }, 00:17:48.134 "block_size": 512, 00:17:48.134 "claimed": false, 00:17:48.134 "driver_specific": { 00:17:48.134 "mp_policy": "active_passive", 00:17:48.134 "nvme": [ 00:17:48.134 { 00:17:48.134 "ctrlr_data": { 00:17:48.134 "ana_reporting": false, 00:17:48.134 "cntlid": 3, 00:17:48.134 "firmware_revision": "24.09", 00:17:48.135 "model_number": "SPDK bdev Controller", 00:17:48.135 "multi_ctrlr": true, 00:17:48.135 "oacs": { 00:17:48.135 "firmware": 0, 00:17:48.135 "format": 0, 00:17:48.135 "ns_manage": 0, 00:17:48.135 "security": 0 00:17:48.135 }, 00:17:48.135 "serial_number": "00000000000000000000", 00:17:48.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.135 "vendor_id": "0x8086" 00:17:48.135 }, 00:17:48.135 "ns_data": { 00:17:48.135 "can_share": true, 00:17:48.135 "id": 1 00:17:48.135 }, 00:17:48.135 "trid": { 00:17:48.135 "adrfam": "IPv4", 00:17:48.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.135 "traddr": "10.0.0.2", 00:17:48.135 "trsvcid": "4421", 00:17:48.135 "trtype": "TCP" 00:17:48.135 }, 00:17:48.135 "vs": { 00:17:48.135 "nvme_version": "1.3" 00:17:48.135 } 00:17:48.135 } 00:17:48.135 ] 00:17:48.135 }, 00:17:48.135 "memory_domains": [ 00:17:48.135 { 00:17:48.135 "dma_device_id": "system", 00:17:48.135 "dma_device_type": 1 00:17:48.135 } 00:17:48.135 ], 00:17:48.135 "name": "nvme0n1", 00:17:48.135 "num_blocks": 2097152, 00:17:48.135 "product_name": "NVMe disk", 00:17:48.135 "supported_io_types": { 00:17:48.135 "abort": true, 00:17:48.135 "compare": true, 00:17:48.135 
"compare_and_write": true, 00:17:48.135 "copy": true, 00:17:48.135 "flush": true, 00:17:48.135 "get_zone_info": false, 00:17:48.135 "nvme_admin": true, 00:17:48.135 "nvme_io": true, 00:17:48.135 "nvme_io_md": false, 00:17:48.135 "nvme_iov_md": false, 00:17:48.135 "read": true, 00:17:48.135 "reset": true, 00:17:48.135 "seek_data": false, 00:17:48.135 "seek_hole": false, 00:17:48.135 "unmap": false, 00:17:48.135 "write": true, 00:17:48.135 "write_zeroes": true, 00:17:48.135 "zcopy": false, 00:17:48.135 "zone_append": false, 00:17:48.135 "zone_management": false 00:17:48.135 }, 00:17:48.135 "uuid": "ffe399f9-13f2-4839-8307-71b550056cda", 00:17:48.135 "zoned": false 00:17:48.135 } 00:17:48.135 ] 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UtODrWGa2G 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.135 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.135 rmmod nvme_tcp 00:17:48.135 rmmod nvme_fabrics 00:17:48.135 rmmod nvme_keyring 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86701 ']' 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86701 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86701 ']' 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86701 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86701 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:17:48.397 killing process with pid 86701 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86701' 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86701 00:17:48.397 [2024-07-25 07:32:20.913196] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:48.397 [2024-07-25 07:32:20.913225] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:48.397 07:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86701 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.397 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:48.659 00:17:48.659 real 0m2.580s 00:17:48.659 user 0m2.239s 00:17:48.659 sys 0m0.679s 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 ************************************ 00:17:48.659 END TEST nvmf_async_init 00:17:48.659 ************************************ 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 ************************************ 00:17:48.659 START TEST dma 00:17:48.659 ************************************ 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:48.659 * Looking for test storage... 
00:17:48.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:17:48.659 00:17:48.659 real 0m0.146s 00:17:48.659 user 0m0.061s 00:17:48.659 sys 0m0.096s 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.659 07:32:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 ************************************ 00:17:48.659 END TEST dma 00:17:48.659 ************************************ 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.919 ************************************ 00:17:48.919 START TEST nvmf_identify 00:17:48.919 ************************************ 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:48.919 * Looking for test storage... 00:17:48.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.919 07:32:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:48.919 Cannot find device "nvmf_tgt_br" 00:17:48.919 07:32:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.919 Cannot find device "nvmf_tgt_br2" 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:48.919 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:48.920 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:48.920 Cannot find device "nvmf_tgt_br" 00:17:48.920 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:48.920 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:49.179 Cannot find device "nvmf_tgt_br2" 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.179 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:49.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:49.439 00:17:49.439 --- 10.0.0.2 ping statistics --- 00:17:49.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.439 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:49.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:49.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:49.439 00:17:49.439 --- 10.0.0.3 ping statistics --- 00:17:49.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.439 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:49.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:49.439 00:17:49.439 --- 10.0.0.1 ping statistics --- 00:17:49.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.439 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86974 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86974 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86974 ']' 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.439 07:32:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.439 [2024-07-25 07:32:21.998048] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:17:49.439 [2024-07-25 07:32:21.998113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.439 [2024-07-25 07:32:22.137956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.699 [2024-07-25 07:32:22.229384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.699 [2024-07-25 07:32:22.229424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.699 [2024-07-25 07:32:22.229430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.699 [2024-07-25 07:32:22.229434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.699 [2024-07-25 07:32:22.229438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.699 [2024-07-25 07:32:22.229661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.699 [2024-07-25 07:32:22.229788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.699 [2024-07-25 07:32:22.229994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.699 [2024-07-25 07:32:22.229998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.268 [2024-07-25 07:32:22.884041] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.268 Malloc0 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.268 07:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.529 07:32:23 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.529 [2024-07-25 07:32:23.024580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.529 [ 00:17:50.529 { 00:17:50.529 "allow_any_host": true, 00:17:50.529 "hosts": [], 00:17:50.529 "listen_addresses": [ 00:17:50.529 { 00:17:50.529 "adrfam": "IPv4", 00:17:50.529 "traddr": "10.0.0.2", 00:17:50.529 "trsvcid": "4420", 00:17:50.529 "trtype": "TCP" 00:17:50.529 } 00:17:50.529 ], 00:17:50.529 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:50.529 "subtype": "Discovery" 00:17:50.529 }, 00:17:50.529 { 00:17:50.529 "allow_any_host": true, 00:17:50.529 "hosts": [], 00:17:50.529 "listen_addresses": [ 00:17:50.529 { 00:17:50.529 "adrfam": "IPv4", 00:17:50.529 "traddr": "10.0.0.2", 00:17:50.529 "trsvcid": "4420", 00:17:50.529 "trtype": "TCP" 00:17:50.529 } 00:17:50.529 ], 00:17:50.529 "max_cntlid": 65519, 00:17:50.529 "max_namespaces": 32, 00:17:50.529 "min_cntlid": 1, 00:17:50.529 "model_number": "SPDK bdev Controller", 00:17:50.529 "namespaces": [ 00:17:50.529 { 00:17:50.529 "bdev_name": "Malloc0", 00:17:50.529 "eui64": "ABCDEF0123456789", 00:17:50.529 "name": "Malloc0", 00:17:50.529 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:50.529 "nsid": 1, 00:17:50.529 "uuid": "ea9e2daf-b8b2-4a44-a62f-8c1ee1ba071c" 00:17:50.529 } 00:17:50.529 ], 00:17:50.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.529 "serial_number": "SPDK00000000000001", 00:17:50.529 "subtype": "NVMe" 00:17:50.529 } 00:17:50.529 ] 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.529 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:50.529 [2024-07-25 07:32:23.093506] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:50.529 [2024-07-25 07:32:23.093556] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87036 ] 00:17:50.529 [2024-07-25 07:32:23.222366] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:50.529 [2024-07-25 07:32:23.222424] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.529 [2024-07-25 07:32:23.222429] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.529 [2024-07-25 07:32:23.222440] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.529 [2024-07-25 07:32:23.222448] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.529 [2024-07-25 07:32:23.222557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:50.529 [2024-07-25 07:32:23.222588] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1653a60 0 00:17:50.529 [2024-07-25 07:32:23.238127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.529 [2024-07-25 07:32:23.238142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.529 [2024-07-25 07:32:23.238146] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.529 [2024-07-25 07:32:23.238148] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.529 [2024-07-25 07:32:23.238182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.529 [2024-07-25 07:32:23.238186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.529 [2024-07-25 07:32:23.238189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.529 [2024-07-25 07:32:23.238200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.529 [2024-07-25 07:32:23.238221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.529 [2024-07-25 07:32:23.246126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.529 [2024-07-25 07:32:23.246138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.529 [2024-07-25 07:32:23.246141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.529 [2024-07-25 07:32:23.246144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.529 [2024-07-25 07:32:23.246151] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:50.529 [2024-07-25 07:32:23.246157] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:50.529 [2024-07-25 07:32:23.246160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:50.529 [2024-07-25 07:32:23.246171] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.529 [2024-07-25 07:32:23.246174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.246246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:50.530 [2024-07-25 07:32:23.246280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:50.530 [2024-07-25 07:32:23.246284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.246342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246354] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:50.530 [2024-07-25 07:32:23.246360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.530 [2024-07-25 07:32:23.246364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 
00:17:50.530 [2024-07-25 07:32:23.246435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.530 [2024-07-25 07:32:23.246453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.246512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246525] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:50.530 [2024-07-25 07:32:23.246528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:50.530 [2024-07-25 07:32:23.246532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.530 [2024-07-25 07:32:23.246636] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:50.530 [2024-07-25 07:32:23.246646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.530 [2024-07-25 07:32:23.246652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.246718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.530 [2024-07-25 07:32:23.246736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.246791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246803] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.530 [2024-07-25 07:32:23.246806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:50.530 [2024-07-25 07:32:23.246811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:50.530 [2024-07-25 07:32:23.246818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.530 [2024-07-25 07:32:23.246824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.530 [2024-07-25 07:32:23.246842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.246910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.530 [2024-07-25 07:32:23.246915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.530 [2024-07-25 07:32:23.246917] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246919] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653a60): datao=0, datal=4096, cccid=0 00:17:50.530 [2024-07-25 07:32:23.246922] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1696840) on tqpair(0x1653a60): 
expected_datao=0, payload_size=4096 00:17:50.530 [2024-07-25 07:32:23.246925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246931] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246934] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.530 [2024-07-25 07:32:23.246945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.530 [2024-07-25 07:32:23.246947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.530 [2024-07-25 07:32:23.246955] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:50.530 [2024-07-25 07:32:23.246958] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:50.530 [2024-07-25 07:32:23.246960] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:50.530 [2024-07-25 07:32:23.246966] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:50.530 [2024-07-25 07:32:23.246969] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:50.530 [2024-07-25 07:32:23.246972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:50.530 [2024-07-25 07:32:23.246978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.530 [2024-07-25 07:32:23.246982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.530 [2024-07-25 07:32:23.246987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.530 [2024-07-25 07:32:23.246992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.530 [2024-07-25 07:32:23.247003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.530 [2024-07-25 07:32:23.247048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.531 [2024-07-25 07:32:23.247053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.531 [2024-07-25 07:32:23.247055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.531 [2024-07-25 07:32:23.247063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.531 [2024-07-25 07:32:23.247076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.531 [2024-07-25 07:32:23.247088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.531 [2024-07-25 07:32:23.247101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.531 [2024-07-25 07:32:23.247112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.531 [2024-07-25 07:32:23.247126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.531 [2024-07-25 07:32:23.247131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.531 [2024-07-25 07:32:23.247153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696840, cid 0, qid 0 00:17:50.531 [2024-07-25 07:32:23.247157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16969c0, cid 1, qid 0 00:17:50.531 [2024-07-25 07:32:23.247160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696b40, cid 2, qid 0 00:17:50.531 [2024-07-25 07:32:23.247163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.531 [2024-07-25 07:32:23.247167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696e40, cid 4, qid 0 00:17:50.531 [2024-07-25 07:32:23.247243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.531 [2024-07-25 07:32:23.247247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.531 [2024-07-25 07:32:23.247249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247252] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x1696e40) on tqpair=0x1653a60 00:17:50.531 [2024-07-25 07:32:23.247255] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:50.531 [2024-07-25 07:32:23.247258] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:50.531 [2024-07-25 07:32:23.247265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.531 [2024-07-25 07:32:23.247283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696e40, cid 4, qid 0 00:17:50.531 [2024-07-25 07:32:23.247325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.531 [2024-07-25 07:32:23.247329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.531 [2024-07-25 07:32:23.247332] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247334] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653a60): datao=0, datal=4096, cccid=4 00:17:50.531 [2024-07-25 07:32:23.247336] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1696e40) on tqpair(0x1653a60): expected_datao=0, payload_size=4096 00:17:50.531 [2024-07-25 07:32:23.247339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247344] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.531 [2024-07-25 07:32:23.247356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.531 [2024-07-25 07:32:23.247358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696e40) on tqpair=0x1653a60 00:17:50.531 [2024-07-25 07:32:23.247369] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:50.531 [2024-07-25 07:32:23.247387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.531 [2024-07-25 07:32:23.247399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1653a60) 00:17:50.531 [2024-07-25 07:32:23.247408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:50.531 [2024-07-25 07:32:23.247423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696e40, cid 4, qid 0 00:17:50.531 [2024-07-25 07:32:23.247427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696fc0, cid 5, qid 0 00:17:50.531 [2024-07-25 07:32:23.247504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.531 [2024-07-25 07:32:23.247508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.531 [2024-07-25 07:32:23.247511] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247513] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653a60): datao=0, datal=1024, cccid=4 00:17:50.531 [2024-07-25 07:32:23.247515] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1696e40) on tqpair(0x1653a60): expected_datao=0, payload_size=1024 00:17:50.531 [2024-07-25 07:32:23.247518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247523] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247525] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.531 [2024-07-25 07:32:23.247533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.531 [2024-07-25 07:32:23.247535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.531 [2024-07-25 07:32:23.247537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696fc0) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.288185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.791 [2024-07-25 07:32:23.288204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.791 [2024-07-25 07:32:23.288208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696e40) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.288223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653a60) 00:17:50.791 [2024-07-25 07:32:23.288233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.791 [2024-07-25 07:32:23.288255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696e40, cid 4, qid 0 00:17:50.791 [2024-07-25 07:32:23.288325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.791 [2024-07-25 07:32:23.288330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.791 [2024-07-25 07:32:23.288332] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288334] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653a60): datao=0, datal=3072, cccid=4 00:17:50.791 [2024-07-25 07:32:23.288337] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1696e40) on tqpair(0x1653a60): expected_datao=0, payload_size=3072 00:17:50.791 [2024-07-25 07:32:23.288339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288347] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288349] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.791 [2024-07-25 07:32:23.288359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.791 [2024-07-25 07:32:23.288361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696e40) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.288369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653a60) 00:17:50.791 [2024-07-25 07:32:23.288376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.791 [2024-07-25 07:32:23.288391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696e40, cid 4, qid 0 00:17:50.791 [2024-07-25 07:32:23.288438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.791 [2024-07-25 07:32:23.288442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.791 [2024-07-25 07:32:23.288444] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288447] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653a60): datao=0, datal=8, cccid=4 00:17:50.791 [2024-07-25 07:32:23.288449] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1696e40) on tqpair(0x1653a60): expected_datao=0, payload_size=8 00:17:50.791 [2024-07-25 07:32:23.288451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288456] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.288458] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.791 ===================================================== 00:17:50.791 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:50.791 ===================================================== 00:17:50.791 Controller Capabilities/Features 00:17:50.791 ================================ 00:17:50.791 Vendor ID: 0000 00:17:50.791 Subsystem Vendor ID: 0000 00:17:50.791 Serial Number: .................... 00:17:50.791 Model Number: ........................................ 
00:17:50.791 Firmware Version: 24.09 00:17:50.791 Recommended Arb Burst: 0 00:17:50.791 IEEE OUI Identifier: 00 00 00 00:17:50.791 Multi-path I/O 00:17:50.791 May have multiple subsystem ports: No 00:17:50.791 May have multiple controllers: No 00:17:50.791 Associated with SR-IOV VF: No 00:17:50.791 Max Data Transfer Size: 131072 00:17:50.791 Max Number of Namespaces: 0 00:17:50.791 Max Number of I/O Queues: 1024 00:17:50.791 NVMe Specification Version (VS): 1.3 00:17:50.791 NVMe Specification Version (Identify): 1.3 00:17:50.791 Maximum Queue Entries: 128 00:17:50.791 Contiguous Queues Required: Yes 00:17:50.791 Arbitration Mechanisms Supported 00:17:50.791 Weighted Round Robin: Not Supported 00:17:50.791 Vendor Specific: Not Supported 00:17:50.791 Reset Timeout: 15000 ms 00:17:50.791 Doorbell Stride: 4 bytes 00:17:50.791 NVM Subsystem Reset: Not Supported 00:17:50.791 Command Sets Supported 00:17:50.791 NVM Command Set: Supported 00:17:50.791 Boot Partition: Not Supported 00:17:50.791 Memory Page Size Minimum: 4096 bytes 00:17:50.791 Memory Page Size Maximum: 4096 bytes 00:17:50.791 Persistent Memory Region: Not Supported 00:17:50.791 Optional Asynchronous Events Supported 00:17:50.791 Namespace Attribute Notices: Not Supported 00:17:50.791 Firmware Activation Notices: Not Supported 00:17:50.791 ANA Change Notices: Not Supported 00:17:50.791 PLE Aggregate Log Change Notices: Not Supported 00:17:50.791 LBA Status Info Alert Notices: Not Supported 00:17:50.791 EGE Aggregate Log Change Notices: Not Supported 00:17:50.791 Normal NVM Subsystem Shutdown event: Not Supported 00:17:50.791 Zone Descriptor Change Notices: Not Supported 00:17:50.791 Discovery Log Change Notices: Supported 00:17:50.791 Controller Attributes 00:17:50.791 128-bit Host Identifier: Not Supported 00:17:50.791 Non-Operational Permissive Mode: Not Supported 00:17:50.791 NVM Sets: Not Supported 00:17:50.791 Read Recovery Levels: Not Supported 00:17:50.791 Endurance Groups: Not Supported 00:17:50.791 Predictable Latency Mode: Not Supported 00:17:50.791 Traffic Based Keep ALive: Not Supported 00:17:50.791 Namespace Granularity: Not Supported 00:17:50.791 SQ Associations: Not Supported 00:17:50.791 UUID List: Not Supported 00:17:50.791 Multi-Domain Subsystem: Not Supported 00:17:50.791 Fixed Capacity Management: Not Supported 00:17:50.791 Variable Capacity Management: Not Supported 00:17:50.791 Delete Endurance Group: Not Supported 00:17:50.791 Delete NVM Set: Not Supported 00:17:50.791 Extended LBA Formats Supported: Not Supported 00:17:50.791 Flexible Data Placement Supported: Not Supported 00:17:50.791 00:17:50.791 Controller Memory Buffer Support 00:17:50.791 ================================ 00:17:50.791 Supported: No 00:17:50.791 00:17:50.791 Persistent Memory Region Support 00:17:50.791 ================================ 00:17:50.791 Supported: No 00:17:50.791 00:17:50.791 Admin Command Set Attributes 00:17:50.791 ============================ 00:17:50.791 Security Send/Receive: Not Supported 00:17:50.791 Format NVM: Not Supported 00:17:50.791 Firmware Activate/Download: Not Supported 00:17:50.791 Namespace Management: Not Supported 00:17:50.791 Device Self-Test: Not Supported 00:17:50.791 Directives: Not Supported 00:17:50.791 NVMe-MI: Not Supported 00:17:50.791 Virtualization Management: Not Supported 00:17:50.791 Doorbell Buffer Config: Not Supported 00:17:50.791 Get LBA Status Capability: Not Supported 00:17:50.791 Command & Feature Lockdown Capability: Not Supported 00:17:50.791 Abort Command Limit: 1 00:17:50.791 Async 
Event Request Limit: 4 00:17:50.791 Number of Firmware Slots: N/A 00:17:50.791 Firmware Slot 1 Read-Only: N/A 00:17:50.791 Firmware Activation Without Reset: N/A 00:17:50.791 Multiple Update Detection Support: N/A 00:17:50.791 Firmware Update Granularity: No Information Provided 00:17:50.791 Per-Namespace SMART Log: No 00:17:50.791 Asymmetric Namespace Access Log Page: Not Supported 00:17:50.791 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:50.791 Command Effects Log Page: Not Supported 00:17:50.791 Get Log Page Extended Data: Supported 00:17:50.791 Telemetry Log Pages: Not Supported 00:17:50.791 Persistent Event Log Pages: Not Supported 00:17:50.791 Supported Log Pages Log Page: May Support 00:17:50.791 Commands Supported & Effects Log Page: Not Supported 00:17:50.791 Feature Identifiers & Effects Log Page:May Support 00:17:50.791 NVMe-MI Commands & Effects Log Page: May Support 00:17:50.791 Data Area 4 for Telemetry Log: Not Supported 00:17:50.791 Error Log Page Entries Supported: 128 00:17:50.791 Keep Alive: Not Supported 00:17:50.791 00:17:50.791 NVM Command Set Attributes 00:17:50.791 ========================== 00:17:50.791 Submission Queue Entry Size 00:17:50.791 Max: 1 00:17:50.791 Min: 1 00:17:50.791 Completion Queue Entry Size 00:17:50.791 Max: 1 00:17:50.791 Min: 1 00:17:50.791 Number of Namespaces: 0 00:17:50.791 Compare Command: Not Supported 00:17:50.791 Write Uncorrectable Command: Not Supported 00:17:50.791 Dataset Management Command: Not Supported 00:17:50.791 Write Zeroes Command: Not Supported 00:17:50.791 Set Features Save Field: Not Supported 00:17:50.791 Reservations: Not Supported 00:17:50.791 Timestamp: Not Supported 00:17:50.791 Copy: Not Supported 00:17:50.791 Volatile Write Cache: Not Present 00:17:50.791 Atomic Write Unit (Normal): 1 00:17:50.791 Atomic Write Unit (PFail): 1 00:17:50.791 Atomic Compare & Write Unit: 1 00:17:50.791 Fused Compare & Write: Supported 00:17:50.791 Scatter-Gather List 00:17:50.791 SGL Command Set: Supported 00:17:50.791 SGL Keyed: Supported 00:17:50.791 SGL Bit Bucket Descriptor: Not Supported 00:17:50.791 SGL Metadata Pointer: Not Supported 00:17:50.791 Oversized SGL: Not Supported 00:17:50.791 SGL Metadata Address: Not Supported 00:17:50.791 SGL Offset: Supported 00:17:50.791 Transport SGL Data Block: Not Supported 00:17:50.791 Replay Protected Memory Block: Not Supported 00:17:50.791 00:17:50.791 Firmware Slot Information 00:17:50.791 ========================= 00:17:50.791 Active slot: 0 00:17:50.791 00:17:50.791 00:17:50.791 Error Log 00:17:50.791 ========= 00:17:50.791 00:17:50.791 Active Namespaces 00:17:50.791 ================= 00:17:50.791 Discovery Log Page 00:17:50.791 ================== 00:17:50.791 Generation Counter: 2 00:17:50.791 Number of Records: 2 00:17:50.791 Record Format: 0 00:17:50.791 00:17:50.791 Discovery Log Entry 0 00:17:50.791 ---------------------- 00:17:50.791 Transport Type: 3 (TCP) 00:17:50.791 Address Family: 1 (IPv4) 00:17:50.791 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:50.791 Entry Flags: 00:17:50.791 Duplicate Returned Information: 1 00:17:50.791 Explicit Persistent Connection Support for Discovery: 1 00:17:50.791 Transport Requirements: 00:17:50.791 Secure Channel: Not Required 00:17:50.791 Port ID: 0 (0x0000) 00:17:50.791 Controller ID: 65535 (0xffff) 00:17:50.791 Admin Max SQ Size: 128 00:17:50.791 Transport Service Identifier: 4420 00:17:50.791 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:50.791 Transport Address: 10.0.0.2 00:17:50.791 
Discovery Log Entry 1 00:17:50.791 ---------------------- 00:17:50.791 Transport Type: 3 (TCP) 00:17:50.791 Address Family: 1 (IPv4) 00:17:50.791 Subsystem Type: 2 (NVM Subsystem) 00:17:50.791 Entry Flags: 00:17:50.791 Duplicate Returned Information: 0 00:17:50.791 Explicit Persistent Connection Support for Discovery: 0 00:17:50.791 Transport Requirements: 00:17:50.791 Secure Channel: Not Required 00:17:50.791 Port ID: 0 (0x0000) 00:17:50.791 Controller ID: 65535 (0xffff) 00:17:50.791 Admin Max SQ Size: 128 00:17:50.791 Transport Service Identifier: 4420 00:17:50.791 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:50.791 Transport Address: 10.0.0.2 [2024-07-25 07:32:23.329179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.791 [2024-07-25 07:32:23.329192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.791 [2024-07-25 07:32:23.329195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696e40) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329289] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:50.791 [2024-07-25 07:32:23.329297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696840) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.791 [2024-07-25 07:32:23.329305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16969c0) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.791 [2024-07-25 07:32:23.329312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696b40) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.791 [2024-07-25 07:32:23.329318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.791 [2024-07-25 07:32:23.329327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.791 [2024-07-25 07:32:23.329337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.791 [2024-07-25 07:32:23.329353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.791 [2024-07-25 07:32:23.329401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.791 [2024-07-25 07:32:23.329405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.791 [2024-07-25 07:32:23.329407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329409] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.791 [2024-07-25 07:32:23.329426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.791 [2024-07-25 07:32:23.329439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.791 [2024-07-25 07:32:23.329496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.791 [2024-07-25 07:32:23.329500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.791 [2024-07-25 07:32:23.329502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329508] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:50.791 [2024-07-25 07:32:23.329511] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:50.791 [2024-07-25 07:32:23.329516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.791 [2024-07-25 07:32:23.329525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.791 [2024-07-25 07:32:23.329535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.791 [2024-07-25 07:32:23.329584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.791 [2024-07-25 07:32:23.329588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.791 [2024-07-25 07:32:23.329590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.791 [2024-07-25 07:32:23.329600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.791 [2024-07-25 07:32:23.329605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.791 [2024-07-25 07:32:23.329609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.791 [2024-07-25 07:32:23.329619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.791 [2024-07-25 07:32:23.329654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.329679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 
07:32:23.329681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.329690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.329700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.329711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.329748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.329753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.329755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.329764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.329773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.329784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.329821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.329826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.329828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.329837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.329846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.329857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.329894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.329898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.329901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on 
tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.329909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.329919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.329929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.329972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.329977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.329979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.329987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.329992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.329997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.330008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.330045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.330050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.330052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.330054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.330061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.330063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.330065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.330070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.330081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.334123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.334134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.334137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.334139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.334146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.334149] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.334151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653a60) 00:17:50.792 [2024-07-25 07:32:23.334155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.792 [2024-07-25 07:32:23.334169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1696cc0, cid 3, qid 0 00:17:50.792 [2024-07-25 07:32:23.334205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.792 [2024-07-25 07:32:23.334209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.792 [2024-07-25 07:32:23.334211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.334213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1696cc0) on tqpair=0x1653a60 00:17:50.792 [2024-07-25 07:32:23.334218] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:17:50.792 00:17:50.792 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:50.792 [2024-07-25 07:32:23.375148] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:50.792 [2024-07-25 07:32:23.375195] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87038 ] 00:17:50.792 [2024-07-25 07:32:23.503740] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:50.792 [2024-07-25 07:32:23.503796] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.792 [2024-07-25 07:32:23.503801] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.792 [2024-07-25 07:32:23.503811] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.792 [2024-07-25 07:32:23.503818] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.792 [2024-07-25 07:32:23.503937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:50.792 [2024-07-25 07:32:23.503971] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e27a60 0 00:17:50.792 [2024-07-25 07:32:23.519139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.792 [2024-07-25 07:32:23.519156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.792 [2024-07-25 07:32:23.519160] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.792 [2024-07-25 07:32:23.519162] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.792 [2024-07-25 07:32:23.519198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.519202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.792 [2024-07-25 07:32:23.519205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:50.792 [2024-07-25 07:32:23.519217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.792 [2024-07-25 07:32:23.519237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.054 [2024-07-25 07:32:23.527133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.054 [2024-07-25 07:32:23.527152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.054 [2024-07-25 07:32:23.527155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.054 [2024-07-25 07:32:23.527167] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.054 [2024-07-25 07:32:23.527173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:51.054 [2024-07-25 07:32:23.527177] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:51.054 [2024-07-25 07:32:23.527191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.054 [2024-07-25 07:32:23.527205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.054 [2024-07-25 07:32:23.527228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.054 [2024-07-25 07:32:23.527277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.054 [2024-07-25 07:32:23.527282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.054 [2024-07-25 07:32:23.527284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.054 [2024-07-25 07:32:23.527290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:51.054 [2024-07-25 07:32:23.527295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:51.054 [2024-07-25 07:32:23.527299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.054 [2024-07-25 07:32:23.527309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.054 [2024-07-25 07:32:23.527320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.054 [2024-07-25 07:32:23.527363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.054 [2024-07-25 07:32:23.527367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:17:51.054 [2024-07-25 07:32:23.527386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.054 [2024-07-25 07:32:23.527394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:51.054 [2024-07-25 07:32:23.527400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.054 [2024-07-25 07:32:23.527405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.054 [2024-07-25 07:32:23.527415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.054 [2024-07-25 07:32:23.527427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.054 [2024-07-25 07:32:23.527471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.054 [2024-07-25 07:32:23.527477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.054 [2024-07-25 07:32:23.527481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.054 [2024-07-25 07:32:23.527491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.054 [2024-07-25 07:32:23.527501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.054 [2024-07-25 07:32:23.527509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.054 [2024-07-25 07:32:23.527514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.054 [2024-07-25 07:32:23.527528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.054 [2024-07-25 07:32:23.527580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.054 [2024-07-25 07:32:23.527586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.055 [2024-07-25 07:32:23.527589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.055 [2024-07-25 07:32:23.527595] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:51.055 [2024-07-25 07:32:23.527598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:51.055 [2024-07-25 07:32:23.527604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
00:17:51.055 [2024-07-25 07:32:23.527708] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:51.055 [2024-07-25 07:32:23.527719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.055 [2024-07-25 07:32:23.527727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.527737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.055 [2024-07-25 07:32:23.527753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.055 [2024-07-25 07:32:23.527799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.055 [2024-07-25 07:32:23.527804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.055 [2024-07-25 07:32:23.527807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.055 [2024-07-25 07:32:23.527813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.055 [2024-07-25 07:32:23.527820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.527831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.055 [2024-07-25 07:32:23.527844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.055 [2024-07-25 07:32:23.527890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.055 [2024-07-25 07:32:23.527895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.055 [2024-07-25 07:32:23.527897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.055 [2024-07-25 07:32:23.527905] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.055 [2024-07-25 07:32:23.527910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.527919] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:51.055 [2024-07-25 07:32:23.527930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.527939] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.527942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.527947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.055 [2024-07-25 07:32:23.527961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.055 [2024-07-25 07:32:23.528062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.055 [2024-07-25 07:32:23.528077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.055 [2024-07-25 07:32:23.528080] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528083] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=4096, cccid=0 00:17:51.055 [2024-07-25 07:32:23.528086] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6a840) on tqpair(0x1e27a60): expected_datao=0, payload_size=4096 00:17:51.055 [2024-07-25 07:32:23.528090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528097] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528101] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.055 [2024-07-25 07:32:23.528112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.055 [2024-07-25 07:32:23.528136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.055 [2024-07-25 07:32:23.528147] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:51.055 [2024-07-25 07:32:23.528150] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:51.055 [2024-07-25 07:32:23.528154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:51.055 [2024-07-25 07:32:23.528163] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:51.055 [2024-07-25 07:32:23.528168] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:51.055 [2024-07-25 07:32:23.528174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.528183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.528190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.528205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.055 [2024-07-25 07:32:23.528223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.055 [2024-07-25 07:32:23.528281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.055 [2024-07-25 07:32:23.528286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.055 [2024-07-25 07:32:23.528288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.055 [2024-07-25 07:32:23.528298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.528308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.055 [2024-07-25 07:32:23.528312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.528322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.055 [2024-07-25 07:32:23.528326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.528336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.055 [2024-07-25 07:32:23.528340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.528349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.055 [2024-07-25 07:32:23.528353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.528358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.528363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e27a60) 00:17:51.055 [2024-07-25 07:32:23.528371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
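The trace up to this point is the host-side admin bring-up against nqn.2016-06.io.spdk:cnode1: fabric property writes and reads to set CC.EN and poll CSTS.RDY, the IDENTIFY controller command, async event (AER) configuration, and the keep-alive timer setup. A kernel NVMe/TCP initiator drives the same sequence; a minimal nvme-cli sketch against the listener reported a little further down in this dump (10.0.0.2:4420) would look roughly like the lines below, assuming nvme-cli is installed and the attached controller enumerates as /dev/nvme0 (a hypothetical device name, not taken from this log):

    # Discover the TCP listener and connect to the subsystem
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Read back the controller data that the SPDK identify pass prints below
    nvme id-ctrl /dev/nvme0
    # Tear the association down again
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1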
00:17:51.055 [2024-07-25 07:32:23.528389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a840, cid 0, qid 0 00:17:51.055 [2024-07-25 07:32:23.528393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6a9c0, cid 1, qid 0 00:17:51.055 [2024-07-25 07:32:23.528397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ab40, cid 2, qid 0 00:17:51.055 [2024-07-25 07:32:23.528400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.055 [2024-07-25 07:32:23.528404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.055 [2024-07-25 07:32:23.528500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.055 [2024-07-25 07:32:23.528509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.055 [2024-07-25 07:32:23.528512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.055 [2024-07-25 07:32:23.528515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.055 [2024-07-25 07:32:23.528519] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:51.055 [2024-07-25 07:32:23.528523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.528529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:51.055 [2024-07-25 07:32:23.528533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.528548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.056 [2024-07-25 07:32:23.528560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.056 [2024-07-25 07:32:23.528602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.528607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.528610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.056 [2024-07-25 07:32:23.528670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.528692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.056 [2024-07-25 07:32:23.528705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.056 [2024-07-25 07:32:23.528765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.056 [2024-07-25 07:32:23.528770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.056 [2024-07-25 07:32:23.528772] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528775] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=4096, cccid=4 00:17:51.056 [2024-07-25 07:32:23.528778] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6ae40) on tqpair(0x1e27a60): expected_datao=0, payload_size=4096 00:17:51.056 [2024-07-25 07:32:23.528781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528787] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528790] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.528801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.528803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.056 [2024-07-25 07:32:23.528814] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:51.056 [2024-07-25 07:32:23.528822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.528842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.056 [2024-07-25 07:32:23.528855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.056 [2024-07-25 07:32:23.528926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.056 [2024-07-25 07:32:23.528932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.056 [2024-07-25 07:32:23.528934] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528937] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=4096, cccid=4 00:17:51.056 [2024-07-25 07:32:23.528940] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6ae40) on tqpair(0x1e27a60): expected_datao=0, payload_size=4096 00:17:51.056 [2024-07-25 
07:32:23.528943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528948] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528951] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.528962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.528964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.056 [2024-07-25 07:32:23.528979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.528992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.528995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.529000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.056 [2024-07-25 07:32:23.529015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.056 [2024-07-25 07:32:23.529075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.056 [2024-07-25 07:32:23.529081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.056 [2024-07-25 07:32:23.529083] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529085] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=4096, cccid=4 00:17:51.056 [2024-07-25 07:32:23.529088] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6ae40) on tqpair(0x1e27a60): expected_datao=0, payload_size=4096 00:17:51.056 [2024-07-25 07:32:23.529091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529097] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529099] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.529110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.529112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.056 [2024-07-25 07:32:23.529134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529148] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529164] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:51.056 [2024-07-25 07:32:23.529167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:51.056 [2024-07-25 07:32:23.529170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:51.056 [2024-07-25 07:32:23.529183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.529191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.056 [2024-07-25 07:32:23.529197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.529207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.056 [2024-07-25 07:32:23.529225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.056 [2024-07-25 07:32:23.529230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6afc0, cid 5, qid 0 00:17:51.056 [2024-07-25 07:32:23.529300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.529304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.529307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.056 [2024-07-25 07:32:23.529315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.529319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.529322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6afc0) on tqpair=0x1e27a60 00:17:51.056 [2024-07-25 07:32:23.529331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.056 [2024-07-25 07:32:23.529334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x1e27a60) 00:17:51.056 [2024-07-25 07:32:23.529339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.056 [2024-07-25 07:32:23.529351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6afc0, cid 5, qid 0 00:17:51.056 [2024-07-25 07:32:23.529412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.056 [2024-07-25 07:32:23.529417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.056 [2024-07-25 07:32:23.529419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6afc0) on tqpair=0x1e27a60 00:17:51.057 [2024-07-25 07:32:23.529429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e27a60) 00:17:51.057 [2024-07-25 07:32:23.529437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.057 [2024-07-25 07:32:23.529448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6afc0, cid 5, qid 0 00:17:51.057 [2024-07-25 07:32:23.529501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.057 [2024-07-25 07:32:23.529505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.057 [2024-07-25 07:32:23.529508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6afc0) on tqpair=0x1e27a60 00:17:51.057 [2024-07-25 07:32:23.529517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e27a60) 00:17:51.057 [2024-07-25 07:32:23.529525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.057 [2024-07-25 07:32:23.529537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6afc0, cid 5, qid 0 00:17:51.057 [2024-07-25 07:32:23.529590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.057 [2024-07-25 07:32:23.529595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.057 [2024-07-25 07:32:23.529597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6afc0) on tqpair=0x1e27a60 00:17:51.057 [2024-07-25 07:32:23.529612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e27a60) 00:17:51.057 [2024-07-25 07:32:23.529620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.057 [2024-07-25 07:32:23.529626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1e27a60) 00:17:51.057 [2024-07-25 07:32:23.529634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.057 [2024-07-25 07:32:23.529639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e27a60) 00:17:51.057 [2024-07-25 07:32:23.529647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.057 [2024-07-25 07:32:23.529653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e27a60) 00:17:51.057 [2024-07-25 07:32:23.529661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.057 [2024-07-25 07:32:23.529674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6afc0, cid 5, qid 0 00:17:51.057 [2024-07-25 07:32:23.529678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6ae40, cid 4, qid 0 00:17:51.057 [2024-07-25 07:32:23.529682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6b140, cid 6, qid 0 00:17:51.057 [2024-07-25 07:32:23.529685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6b2c0, cid 7, qid 0 00:17:51.057 [2024-07-25 07:32:23.529835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.057 [2024-07-25 07:32:23.529848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.057 [2024-07-25 07:32:23.529851] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529854] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=8192, cccid=5 00:17:51.057 [2024-07-25 07:32:23.529857] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6afc0) on tqpair(0x1e27a60): expected_datao=0, payload_size=8192 00:17:51.057 [2024-07-25 07:32:23.529860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529873] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529876] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.057 [2024-07-25 07:32:23.529885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.057 [2024-07-25 07:32:23.529888] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529890] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=512, cccid=4 00:17:51.057 [2024-07-25 07:32:23.529893] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6ae40) on tqpair(0x1e27a60): expected_datao=0, payload_size=512 00:17:51.057 [2024-07-25 07:32:23.529896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529901] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529903] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.057 [2024-07-25 07:32:23.529912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.057 [2024-07-25 07:32:23.529914] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=512, cccid=6 00:17:51.057 [2024-07-25 07:32:23.529920] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6b140) on tqpair(0x1e27a60): expected_datao=0, payload_size=512 00:17:51.057 [2024-07-25 07:32:23.529922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529927] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529930] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.057 [2024-07-25 07:32:23.529938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.057 [2024-07-25 07:32:23.529940] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529943] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e27a60): datao=0, datal=4096, cccid=7 00:17:51.057 [2024-07-25 07:32:23.529946] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e6b2c0) on tqpair(0x1e27a60): expected_datao=0, payload_size=4096 00:17:51.057 [2024-07-25 07:32:23.529948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529954] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529956] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.057 [2024-07-25 07:32:23.529967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.057 [2024-07-25 07:32:23.529969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6afc0) on tqpair=0x1e27a60 00:17:51.057 [2024-07-25 07:32:23.529985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.057 [2024-07-25 07:32:23.529989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.057 [2024-07-25 07:32:23.529992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.529994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ae40) on tqpair=0x1e27a60 00:17:51.057 [2024-07-25 07:32:23.530004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.057 [2024-07-25 07:32:23.530008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.057 [2024-07-25 07:32:23.530011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.530014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6b140) on tqpair=0x1e27a60 00:17:51.057 [2024-07-25 07:32:23.530019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.057 [2024-07-25 07:32:23.530023] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.057 [2024-07-25 07:32:23.530026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.057 [2024-07-25 07:32:23.530029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6b2c0) on tqpair=0x1e27a60 00:17:51.057 ===================================================== 00:17:51.057 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.057 ===================================================== 00:17:51.057 Controller Capabilities/Features 00:17:51.057 ================================ 00:17:51.057 Vendor ID: 8086 00:17:51.057 Subsystem Vendor ID: 8086 00:17:51.057 Serial Number: SPDK00000000000001 00:17:51.057 Model Number: SPDK bdev Controller 00:17:51.057 Firmware Version: 24.09 00:17:51.057 Recommended Arb Burst: 6 00:17:51.057 IEEE OUI Identifier: e4 d2 5c 00:17:51.057 Multi-path I/O 00:17:51.057 May have multiple subsystem ports: Yes 00:17:51.057 May have multiple controllers: Yes 00:17:51.057 Associated with SR-IOV VF: No 00:17:51.057 Max Data Transfer Size: 131072 00:17:51.057 Max Number of Namespaces: 32 00:17:51.057 Max Number of I/O Queues: 127 00:17:51.057 NVMe Specification Version (VS): 1.3 00:17:51.057 NVMe Specification Version (Identify): 1.3 00:17:51.057 Maximum Queue Entries: 128 00:17:51.057 Contiguous Queues Required: Yes 00:17:51.057 Arbitration Mechanisms Supported 00:17:51.057 Weighted Round Robin: Not Supported 00:17:51.057 Vendor Specific: Not Supported 00:17:51.057 Reset Timeout: 15000 ms 00:17:51.057 Doorbell Stride: 4 bytes 00:17:51.057 NVM Subsystem Reset: Not Supported 00:17:51.057 Command Sets Supported 00:17:51.057 NVM Command Set: Supported 00:17:51.057 Boot Partition: Not Supported 00:17:51.057 Memory Page Size Minimum: 4096 bytes 00:17:51.057 Memory Page Size Maximum: 4096 bytes 00:17:51.057 Persistent Memory Region: Not Supported 00:17:51.057 Optional Asynchronous Events Supported 00:17:51.057 Namespace Attribute Notices: Supported 00:17:51.057 Firmware Activation Notices: Not Supported 00:17:51.057 ANA Change Notices: Not Supported 00:17:51.058 PLE Aggregate Log Change Notices: Not Supported 00:17:51.058 LBA Status Info Alert Notices: Not Supported 00:17:51.058 EGE Aggregate Log Change Notices: Not Supported 00:17:51.058 Normal NVM Subsystem Shutdown event: Not Supported 00:17:51.058 Zone Descriptor Change Notices: Not Supported 00:17:51.058 Discovery Log Change Notices: Not Supported 00:17:51.058 Controller Attributes 00:17:51.058 128-bit Host Identifier: Supported 00:17:51.058 Non-Operational Permissive Mode: Not Supported 00:17:51.058 NVM Sets: Not Supported 00:17:51.058 Read Recovery Levels: Not Supported 00:17:51.058 Endurance Groups: Not Supported 00:17:51.058 Predictable Latency Mode: Not Supported 00:17:51.058 Traffic Based Keep ALive: Not Supported 00:17:51.058 Namespace Granularity: Not Supported 00:17:51.058 SQ Associations: Not Supported 00:17:51.058 UUID List: Not Supported 00:17:51.058 Multi-Domain Subsystem: Not Supported 00:17:51.058 Fixed Capacity Management: Not Supported 00:17:51.058 Variable Capacity Management: Not Supported 00:17:51.058 Delete Endurance Group: Not Supported 00:17:51.058 Delete NVM Set: Not Supported 00:17:51.058 Extended LBA Formats Supported: Not Supported 00:17:51.058 Flexible Data Placement Supported: Not Supported 00:17:51.058 00:17:51.058 Controller Memory Buffer Support 00:17:51.058 ================================ 00:17:51.058 Supported: No 00:17:51.058 00:17:51.058 
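The capability dump that starts here (vendor 8086, serial SPDK00000000000001, model "SPDK bdev Controller", 32 namespaces, 127 I/O queues, fused compare and write supported) is what an SPDK nvmf target reports for a small bdev-backed subsystem. The exact RPCs this job ran are not part of this excerpt; a minimal sketch of a comparable target setup using the repo's rpc.py is shown below, where the bdev name Malloc0 and the 64 MiB / 512 B sizes are illustrative assumptions rather than values taken from the log:

    # Create the TCP transport and a RAM-backed bdev
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    # Expose it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420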
Persistent Memory Region Support 00:17:51.058 ================================ 00:17:51.058 Supported: No 00:17:51.058 00:17:51.058 Admin Command Set Attributes 00:17:51.058 ============================ 00:17:51.058 Security Send/Receive: Not Supported 00:17:51.058 Format NVM: Not Supported 00:17:51.058 Firmware Activate/Download: Not Supported 00:17:51.058 Namespace Management: Not Supported 00:17:51.058 Device Self-Test: Not Supported 00:17:51.058 Directives: Not Supported 00:17:51.058 NVMe-MI: Not Supported 00:17:51.058 Virtualization Management: Not Supported 00:17:51.058 Doorbell Buffer Config: Not Supported 00:17:51.058 Get LBA Status Capability: Not Supported 00:17:51.058 Command & Feature Lockdown Capability: Not Supported 00:17:51.058 Abort Command Limit: 4 00:17:51.058 Async Event Request Limit: 4 00:17:51.058 Number of Firmware Slots: N/A 00:17:51.058 Firmware Slot 1 Read-Only: N/A 00:17:51.058 Firmware Activation Without Reset: N/A 00:17:51.058 Multiple Update Detection Support: N/A 00:17:51.058 Firmware Update Granularity: No Information Provided 00:17:51.058 Per-Namespace SMART Log: No 00:17:51.058 Asymmetric Namespace Access Log Page: Not Supported 00:17:51.058 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:51.058 Command Effects Log Page: Supported 00:17:51.058 Get Log Page Extended Data: Supported 00:17:51.058 Telemetry Log Pages: Not Supported 00:17:51.058 Persistent Event Log Pages: Not Supported 00:17:51.058 Supported Log Pages Log Page: May Support 00:17:51.058 Commands Supported & Effects Log Page: Not Supported 00:17:51.058 Feature Identifiers & Effects Log Page:May Support 00:17:51.058 NVMe-MI Commands & Effects Log Page: May Support 00:17:51.058 Data Area 4 for Telemetry Log: Not Supported 00:17:51.058 Error Log Page Entries Supported: 128 00:17:51.058 Keep Alive: Supported 00:17:51.058 Keep Alive Granularity: 10000 ms 00:17:51.058 00:17:51.058 NVM Command Set Attributes 00:17:51.058 ========================== 00:17:51.058 Submission Queue Entry Size 00:17:51.058 Max: 64 00:17:51.058 Min: 64 00:17:51.058 Completion Queue Entry Size 00:17:51.058 Max: 16 00:17:51.058 Min: 16 00:17:51.058 Number of Namespaces: 32 00:17:51.058 Compare Command: Supported 00:17:51.058 Write Uncorrectable Command: Not Supported 00:17:51.058 Dataset Management Command: Supported 00:17:51.058 Write Zeroes Command: Supported 00:17:51.058 Set Features Save Field: Not Supported 00:17:51.058 Reservations: Supported 00:17:51.058 Timestamp: Not Supported 00:17:51.058 Copy: Supported 00:17:51.058 Volatile Write Cache: Present 00:17:51.058 Atomic Write Unit (Normal): 1 00:17:51.058 Atomic Write Unit (PFail): 1 00:17:51.058 Atomic Compare & Write Unit: 1 00:17:51.058 Fused Compare & Write: Supported 00:17:51.058 Scatter-Gather List 00:17:51.058 SGL Command Set: Supported 00:17:51.058 SGL Keyed: Supported 00:17:51.058 SGL Bit Bucket Descriptor: Not Supported 00:17:51.058 SGL Metadata Pointer: Not Supported 00:17:51.058 Oversized SGL: Not Supported 00:17:51.058 SGL Metadata Address: Not Supported 00:17:51.058 SGL Offset: Supported 00:17:51.058 Transport SGL Data Block: Not Supported 00:17:51.058 Replay Protected Memory Block: Not Supported 00:17:51.058 00:17:51.058 Firmware Slot Information 00:17:51.058 ========================= 00:17:51.058 Active slot: 1 00:17:51.058 Slot 1 Firmware Revision: 24.09 00:17:51.058 00:17:51.058 00:17:51.058 Commands Supported and Effects 00:17:51.058 ============================== 00:17:51.058 Admin Commands 00:17:51.058 -------------- 00:17:51.058 Get Log Page (02h): 
Supported 00:17:51.058 Identify (06h): Supported 00:17:51.058 Abort (08h): Supported 00:17:51.058 Set Features (09h): Supported 00:17:51.058 Get Features (0Ah): Supported 00:17:51.058 Asynchronous Event Request (0Ch): Supported 00:17:51.058 Keep Alive (18h): Supported 00:17:51.058 I/O Commands 00:17:51.058 ------------ 00:17:51.058 Flush (00h): Supported LBA-Change 00:17:51.058 Write (01h): Supported LBA-Change 00:17:51.058 Read (02h): Supported 00:17:51.058 Compare (05h): Supported 00:17:51.058 Write Zeroes (08h): Supported LBA-Change 00:17:51.058 Dataset Management (09h): Supported LBA-Change 00:17:51.058 Copy (19h): Supported LBA-Change 00:17:51.058 00:17:51.058 Error Log 00:17:51.058 ========= 00:17:51.058 00:17:51.058 Arbitration 00:17:51.058 =========== 00:17:51.058 Arbitration Burst: 1 00:17:51.058 00:17:51.058 Power Management 00:17:51.058 ================ 00:17:51.058 Number of Power States: 1 00:17:51.058 Current Power State: Power State #0 00:17:51.058 Power State #0: 00:17:51.058 Max Power: 0.00 W 00:17:51.058 Non-Operational State: Operational 00:17:51.058 Entry Latency: Not Reported 00:17:51.058 Exit Latency: Not Reported 00:17:51.058 Relative Read Throughput: 0 00:17:51.058 Relative Read Latency: 0 00:17:51.058 Relative Write Throughput: 0 00:17:51.058 Relative Write Latency: 0 00:17:51.058 Idle Power: Not Reported 00:17:51.058 Active Power: Not Reported 00:17:51.058 Non-Operational Permissive Mode: Not Supported 00:17:51.058 00:17:51.058 Health Information 00:17:51.058 ================== 00:17:51.058 Critical Warnings: 00:17:51.058 Available Spare Space: OK 00:17:51.058 Temperature: OK 00:17:51.058 Device Reliability: OK 00:17:51.058 Read Only: No 00:17:51.058 Volatile Memory Backup: OK 00:17:51.058 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:51.058 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:51.058 Available Spare: 0% 00:17:51.058 Available Spare Threshold: 0% 00:17:51.058 Life Percentage Used:[2024-07-25 07:32:23.530127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.058 [2024-07-25 07:32:23.530131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e27a60) 00:17:51.058 [2024-07-25 07:32:23.530137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.058 [2024-07-25 07:32:23.530153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6b2c0, cid 7, qid 0 00:17:51.058 [2024-07-25 07:32:23.530207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.058 [2024-07-25 07:32:23.530212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.058 [2024-07-25 07:32:23.530214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.058 [2024-07-25 07:32:23.530217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6b2c0) on tqpair=0x1e27a60 00:17:51.058 [2024-07-25 07:32:23.530246] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:51.058 [2024-07-25 07:32:23.530253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a840) on tqpair=0x1e27a60 00:17:51.058 [2024-07-25 07:32:23.530258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.058 [2024-07-25 07:32:23.530262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6a9c0) on 
tqpair=0x1e27a60 00:17:51.058 [2024-07-25 07:32:23.530266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.058 [2024-07-25 07:32:23.530269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6ab40) on tqpair=0x1e27a60 00:17:51.058 [2024-07-25 07:32:23.530273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.058 [2024-07-25 07:32:23.530276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.058 [2024-07-25 07:32:23.530280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.058 [2024-07-25 07:32:23.530286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530484] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:51.059 [2024-07-25 07:32:23.530487] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:51.059 [2024-07-25 07:32:23.530494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
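The entries around here are the orderly shutdown handshake after the identify pass finishes: the outstanding admin commands (the queued AERs) complete as ABORTED - SQ DELETION, and the host then uses fabric property set/get on cid 3 to write the shutdown notification into CC and poll CSTS until the "shutdown complete" message a few entries further down, within the 10000 ms shutdown timeout logged above. The whole dump, shutdown included, is produced by one run of the SPDK identify example; a hedged sketch of such an invocation is below, with the binary path and transport ID string being assumptions based on the usual examples layout rather than the job's actual command line, which is not shown in this excerpt:

    # Connect over TCP, print controller and namespace data, then detach
    # (detaching triggers the shutdown trace seen here)
    ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'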
00:17:51.059 [2024-07-25 07:32:23.530500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530769] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.530923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.530928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.530931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.530941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.530946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.530951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.530963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.531017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.531022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.531024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.531027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.531034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.531037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.531040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.059 [2024-07-25 07:32:23.531045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.059 [2024-07-25 07:32:23.531056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.059 [2024-07-25 07:32:23.531106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.059 [2024-07-25 07:32:23.531111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.059 [2024-07-25 07:32:23.531113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.535128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.059 [2024-07-25 07:32:23.535138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.059 [2024-07-25 07:32:23.535141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.060 [2024-07-25 07:32:23.535143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e27a60) 00:17:51.060 [2024-07-25 07:32:23.535149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.060 [2024-07-25 07:32:23.535165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e6acc0, cid 3, qid 0 00:17:51.060 [2024-07-25 07:32:23.535219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.060 [2024-07-25 07:32:23.535224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.060 [2024-07-25 07:32:23.535226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.060 [2024-07-25 07:32:23.535229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e6acc0) on tqpair=0x1e27a60 00:17:51.060 [2024-07-25 07:32:23.535235] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:17:51.060 0% 00:17:51.060 Data Units Read: 0 00:17:51.060 Data Units Written: 0 00:17:51.060 Host Read Commands: 0 00:17:51.060 Host Write Commands: 0 00:17:51.060 Controller Busy Time: 0 minutes 00:17:51.060 Power Cycles: 0 00:17:51.060 Power On Hours: 0 hours 00:17:51.060 Unsafe Shutdowns: 0 00:17:51.060 Unrecoverable Media Errors: 0 00:17:51.060 Lifetime Error Log Entries: 0 00:17:51.060 Warning Temperature Time: 0 minutes 00:17:51.060 Critical Temperature Time: 0 minutes 00:17:51.060 00:17:51.060 Number of Queues 00:17:51.060 ================ 00:17:51.060 Number of I/O Submission Queues: 127 00:17:51.060 Number of I/O Completion Queues: 127 00:17:51.060 00:17:51.060 Active Namespaces 00:17:51.060 ================= 00:17:51.060 Namespace ID:1 00:17:51.060 Error Recovery Timeout: Unlimited 00:17:51.060 Command Set Identifier: NVM (00h) 00:17:51.060 Deallocate: Supported 00:17:51.060 Deallocated/Unwritten Error: Not Supported 00:17:51.060 Deallocated Read Value: Unknown 00:17:51.060 Deallocate in Write Zeroes: Not Supported 00:17:51.060 Deallocated Guard Field: 0xFFFF 00:17:51.060 Flush: Supported 00:17:51.060 Reservation: Supported 00:17:51.060 Namespace Sharing Capabilities: Multiple Controllers 00:17:51.060 Size (in LBAs): 131072 (0GiB) 00:17:51.060 Capacity (in LBAs): 131072 (0GiB) 00:17:51.060 Utilization (in LBAs): 131072 (0GiB) 00:17:51.060 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:51.060 EUI64: ABCDEF0123456789 00:17:51.060 UUID: ea9e2daf-b8b2-4a44-a62f-8c1ee1ba071c 00:17:51.060 Thin Provisioning: Not Supported 00:17:51.060 Per-NS Atomic Units: Yes 00:17:51.060 Atomic Boundary Size (Normal): 0 00:17:51.060 Atomic Boundary Size (PFail): 0 00:17:51.060 Atomic Boundary Offset: 0 00:17:51.060 Maximum Single Source Range Length: 
65535 00:17:51.060 Maximum Copy Length: 65535 00:17:51.060 Maximum Source Range Count: 1 00:17:51.060 NGUID/EUI64 Never Reused: No 00:17:51.060 Namespace Write Protected: No 00:17:51.060 Number of LBA Formats: 1 00:17:51.060 Current LBA Format: LBA Format #00 00:17:51.060 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:51.060 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.060 rmmod nvme_tcp 00:17:51.060 rmmod nvme_fabrics 00:17:51.060 rmmod nvme_keyring 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86974 ']' 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86974 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86974 ']' 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86974 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86974 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:51.060 killing process with pid 86974 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86974' 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86974 00:17:51.060 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86974 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.320 07:32:23 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:51.320 00:17:51.320 real 0m2.558s 00:17:51.320 user 0m6.988s 00:17:51.320 sys 0m0.665s 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.320 07:32:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.320 ************************************ 00:17:51.320 END TEST nvmf_identify 00:17:51.320 ************************************ 00:17:51.320 07:32:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:51.320 07:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.320 07:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.320 07:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.320 ************************************ 00:17:51.320 START TEST nvmf_perf 00:17:51.320 ************************************ 00:17:51.320 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:51.580 * Looking for test storage... 
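The next test, nvmf_perf, exercises the same TCP target with the SPDK perf example instead of identify. Its actual parameters come from perf.sh and are not visible yet at this point in the log; a representative invocation is sketched below, where the queue depth, I/O size, workload mix and runtime are placeholders chosen for illustration, not the script's values:

    # 4 KiB random read/write at queue depth 32 for 10 s against the TCP listener
    ./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'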
00:17:51.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.580 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.581 07:32:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:51.581 Cannot find device "nvmf_tgt_br" 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.581 Cannot find device "nvmf_tgt_br2" 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:51.581 Cannot find device "nvmf_tgt_br" 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:17:51.581 Cannot find device "nvmf_tgt_br2" 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:51.581 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:51.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:51.841 00:17:51.841 --- 10.0.0.2 ping statistics --- 00:17:51.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.841 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:51.841 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.841 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:51.841 00:17:51.841 --- 10.0.0.3 ping statistics --- 00:17:51.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.841 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:51.841 00:17:51.841 --- 10.0.0.1 ping statistics --- 00:17:51.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.841 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.841 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=87205 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 87205 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 87205 ']' 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.842 07:32:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.842 07:32:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.101 [2024-07-25 07:32:24.575281] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:17:52.101 [2024-07-25 07:32:24.575350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.101 [2024-07-25 07:32:24.718550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.101 [2024-07-25 07:32:24.807439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.101 [2024-07-25 07:32:24.807486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.101 [2024-07-25 07:32:24.807492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.101 [2024-07-25 07:32:24.807497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.101 [2024-07-25 07:32:24.807501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.101 [2024-07-25 07:32:24.807853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.101 [2024-07-25 07:32:24.808031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.101 [2024-07-25 07:32:24.808124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.101 [2024-07-25 07:32:24.808158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:53.037 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:53.296 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:53.296 07:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:53.296 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:53.296 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:53.554 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:53.554 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:53.554 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:53.554 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:53.554 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.811 [2024-07-25 07:32:26.396667] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.811 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.067 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:54.067 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.067 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:54.067 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:54.323 07:32:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.580 [2024-07-25 07:32:27.108709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.580 07:32:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:54.580 07:32:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:54.580 07:32:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:54.580 07:32:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:54.580 07:32:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:55.953 Initializing NVMe Controllers 00:17:55.953 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:55.954 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:55.954 Initialization complete. Launching workers. 
00:17:55.954 ======================================================== 00:17:55.954 Latency(us) 00:17:55.954 Device Information : IOPS MiB/s Average min max 00:17:55.954 PCIE (0000:00:10.0) NSID 1 from core 0: 20352.00 79.50 1572.30 276.00 9476.63 00:17:55.954 ======================================================== 00:17:55.954 Total : 20352.00 79.50 1572.30 276.00 9476.63 00:17:55.954 00:17:55.954 07:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:56.885 Initializing NVMe Controllers 00:17:56.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:56.885 Initialization complete. Launching workers. 00:17:56.885 ======================================================== 00:17:56.885 Latency(us) 00:17:56.885 Device Information : IOPS MiB/s Average min max 00:17:56.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4272.17 16.69 233.86 74.29 4260.89 00:17:56.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.89 0.48 8071.54 4880.06 12052.92 00:17:56.885 ======================================================== 00:17:56.885 Total : 4396.06 17.17 454.74 74.29 12052.92 00:17:56.885 00:17:57.142 07:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:58.517 Initializing NVMe Controllers 00:17:58.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:58.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:58.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:58.517 Initialization complete. Launching workers. 00:17:58.517 ======================================================== 00:17:58.517 Latency(us) 00:17:58.517 Device Information : IOPS MiB/s Average min max 00:17:58.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10677.99 41.71 2997.52 564.43 10192.51 00:17:58.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2706.00 10.57 11910.84 5928.78 23961.38 00:17:58.517 ======================================================== 00:17:58.517 Total : 13383.99 52.28 4799.63 564.43 23961.38 00:17:58.517 00:17:58.517 07:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:58.517 07:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:01.055 Initializing NVMe Controllers 00:18:01.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.055 Controller IO queue size 128, less than required. 00:18:01.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.055 Controller IO queue size 128, less than required. 
00:18:01.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:01.055 Initialization complete. Launching workers. 00:18:01.055 ======================================================== 00:18:01.055 Latency(us) 00:18:01.055 Device Information : IOPS MiB/s Average min max 00:18:01.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2207.32 551.83 58646.37 39594.96 102965.44 00:18:01.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.41 149.60 220776.20 55695.25 357327.10 00:18:01.055 ======================================================== 00:18:01.055 Total : 2805.73 701.43 93225.60 39594.96 357327.10 00:18:01.055 00:18:01.055 07:32:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:01.055 Initializing NVMe Controllers 00:18:01.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.055 Controller IO queue size 128, less than required. 00:18:01.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.056 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:01.056 Controller IO queue size 128, less than required. 00:18:01.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.056 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:01.056 WARNING: Some requested NVMe devices were skipped 00:18:01.056 No valid NVMe controllers or AIO or URING devices found 00:18:01.314 07:32:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:03.853 Initializing NVMe Controllers 00:18:03.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:03.853 Controller IO queue size 128, less than required. 00:18:03.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:03.853 Controller IO queue size 128, less than required. 00:18:03.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:03.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:03.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:03.853 Initialization complete. Launching workers. 
00:18:03.853 00:18:03.853 ==================== 00:18:03.853 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:03.853 TCP transport: 00:18:03.853 polls: 17598 00:18:03.853 idle_polls: 13977 00:18:03.853 sock_completions: 3621 00:18:03.853 nvme_completions: 5871 00:18:03.853 submitted_requests: 8764 00:18:03.853 queued_requests: 1 00:18:03.853 00:18:03.853 ==================== 00:18:03.853 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:03.853 TCP transport: 00:18:03.853 polls: 17681 00:18:03.853 idle_polls: 13660 00:18:03.853 sock_completions: 4021 00:18:03.853 nvme_completions: 6585 00:18:03.853 submitted_requests: 9828 00:18:03.853 queued_requests: 1 00:18:03.853 ======================================================== 00:18:03.853 Latency(us) 00:18:03.853 Device Information : IOPS MiB/s Average min max 00:18:03.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1467.37 366.84 88652.12 54547.46 166842.37 00:18:03.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1645.85 411.46 79377.48 44634.02 154099.36 00:18:03.853 ======================================================== 00:18:03.853 Total : 3113.22 778.30 83748.93 44634.02 166842.37 00:18:03.853 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.853 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.853 rmmod nvme_tcp 00:18:04.113 rmmod nvme_fabrics 00:18:04.113 rmmod nvme_keyring 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 87205 ']' 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 87205 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 87205 ']' 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 87205 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87205 00:18:04.113 killing process with pid 87205 00:18:04.113 07:32:36 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.113 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.114 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87205' 00:18:04.114 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 87205 00:18:04.114 07:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 87205 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:06.021 00:18:06.021 real 0m14.387s 00:18:06.021 user 0m52.777s 00:18:06.021 sys 0m3.277s 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:06.021 ************************************ 00:18:06.021 END TEST nvmf_perf 00:18:06.021 ************************************ 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.021 ************************************ 00:18:06.021 START TEST nvmf_fio_host 00:18:06.021 ************************************ 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:06.021 * Looking for test storage... 
00:18:06.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.021 07:32:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.021 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:06.022 Cannot find device "nvmf_tgt_br" 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.022 Cannot find device "nvmf_tgt_br2" 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:06.022 
Cannot find device "nvmf_tgt_br" 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:06.022 Cannot find device "nvmf_tgt_br2" 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:18:06.022 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:06.282 07:32:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:06.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:18:06.282 00:18:06.282 --- 10.0.0.2 ping statistics --- 00:18:06.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.282 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:06.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:06.282 00:18:06.282 --- 10.0.0.3 ping statistics --- 00:18:06.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.282 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:06.282 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:18:06.283 00:18:06.283 --- 10.0.0.1 ping statistics --- 00:18:06.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.283 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.283 07:32:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87691 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87691 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87691 ']' 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.542 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.542 [2024-07-25 07:32:39.086314] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:06.542 [2024-07-25 07:32:39.086382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.542 [2024-07-25 07:32:39.207180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.802 [2024-07-25 07:32:39.296957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.802 [2024-07-25 07:32:39.297004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.802 [2024-07-25 07:32:39.297010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.802 [2024-07-25 07:32:39.297015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.802 [2024-07-25 07:32:39.297018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.802 [2024-07-25 07:32:39.297413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.802 [2024-07-25 07:32:39.297519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.802 [2024-07-25 07:32:39.297596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.802 [2024-07-25 07:32:39.297599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.371 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.371 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:18:07.371 07:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:07.371 [2024-07-25 07:32:40.082977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.631 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:07.631 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.631 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.631 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:07.631 Malloc1 00:18:07.890 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.891 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:08.150 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.410 [2024-07-25 07:32:40.934991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.410 07:32:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # shift 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:18:08.410 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:18:08.669 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:18:08.669 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:18:08.669 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.669 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.669 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:18:08.670 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:18:08.670 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:18:08.670 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:18:08.670 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:08.670 07:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.670 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:08.670 fio-3.35 00:18:08.670 Starting 1 thread 00:18:11.207 00:18:11.207 test: (groupid=0, jobs=1): err= 0: pid=87813: Thu Jul 25 07:32:43 2024 00:18:11.207 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(81.8MiB/2006msec) 00:18:11.207 slat (nsec): min=1510, max=436800, avg=1724.64, stdev=3899.70 00:18:11.207 clat (usec): min=3523, max=22618, avg=6428.82, stdev=1081.90 00:18:11.207 lat (usec): min=3526, max=22620, avg=6430.54, stdev=1081.97 00:18:11.207 clat percentiles (usec): 00:18:11.207 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 5997], 00:18:11.207 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:18:11.207 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:18:11.207 | 99.00th=[ 9372], 99.50th=[12911], 99.90th=[21365], 99.95th=[22152], 00:18:11.207 | 99.99th=[22676] 00:18:11.207 bw ( KiB/s): min=41384, max=42155, per=99.96%, avg=41736.75, stdev=318.70, samples=4 00:18:11.207 iops : min=10346, max=10538, avg=10434.00, stdev=79.35, samples=4 00:18:11.207 write: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(81.8MiB/2006msec); 0 zone resets 00:18:11.207 slat (nsec): min=1542, max=344772, avg=1781.95, stdev=2611.82 00:18:11.207 clat (usec): min=2994, max=20735, avg=5774.88, stdev=891.25 00:18:11.207 lat (usec): min=2997, max=20737, avg=5776.67, stdev=891.28 00:18:11.207 clat percentiles (usec): 00:18:11.207 | 1.00th=[ 4359], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5407], 
00:18:11.207 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:18:11.207 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:18:11.207 | 99.00th=[ 8094], 99.50th=[10945], 99.90th=[18744], 99.95th=[20055], 00:18:11.207 | 99.99th=[20579] 00:18:11.207 bw ( KiB/s): min=41344, max=42128, per=99.93%, avg=41743.00, stdev=322.00, samples=4 00:18:11.207 iops : min=10336, max=10532, avg=10435.75, stdev=80.50, samples=4 00:18:11.207 lat (msec) : 4=0.31%, 10=98.93%, 20=0.63%, 50=0.13% 00:18:11.207 cpu : usr=72.57%, sys=20.90%, ctx=9, majf=0, minf=6 00:18:11.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:11.207 issued rwts: total=20940,20948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:11.207 00:18:11.207 Run status group 0 (all jobs): 00:18:11.207 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=81.8MiB (85.8MB), run=2006-2006msec 00:18:11.207 WRITE: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=81.8MiB (85.8MB), run=2006-2006msec 00:18:11.207 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:11.207 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:11.207 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:18:11.207 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:11.208 07:32:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:11.208 07:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:11.208 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:11.208 fio-3.35 00:18:11.208 Starting 1 thread 00:18:13.749 00:18:13.749 test: (groupid=0, jobs=1): err= 0: pid=87861: Thu Jul 25 07:32:46 2024 00:18:13.749 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(334MiB/2005msec) 00:18:13.749 slat (nsec): min=2367, max=89156, avg=2734.46, stdev=1453.89 00:18:13.749 clat (usec): min=1881, max=15766, avg=7107.06, stdev=1805.83 00:18:13.749 lat (usec): min=1885, max=15785, avg=7109.80, stdev=1806.01 00:18:13.749 clat percentiles (usec): 00:18:13.749 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5538], 00:18:13.749 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 7111], 60.00th=[ 7635], 00:18:13.749 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10028], 00:18:13.749 | 99.00th=[12125], 99.50th=[12780], 99.90th=[14353], 99.95th=[15139], 00:18:13.749 | 99.99th=[15795] 00:18:13.749 bw ( KiB/s): min=78560, max=95424, per=49.77%, avg=84784.00, stdev=7345.82, samples=4 00:18:13.749 iops : min= 4910, max= 5964, avg=5299.00, stdev=459.11, samples=4 00:18:13.749 write: IOPS=6295, BW=98.4MiB/s (103MB/s)(173MiB/1758msec); 0 zone resets 00:18:13.749 slat (usec): min=27, max=552, avg=30.18, stdev= 9.50 00:18:13.749 clat (usec): min=3526, max=16904, avg=8676.36, stdev=1632.43 00:18:13.749 lat (usec): min=3554, max=17051, avg=8706.54, stdev=1634.62 00:18:13.749 clat percentiles (usec): 00:18:13.749 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7242], 00:18:13.749 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:18:13.749 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:18:13.749 | 99.00th=[12911], 99.50th=[13566], 99.90th=[15926], 99.95th=[16319], 00:18:13.749 | 99.99th=[16712] 00:18:13.749 bw ( KiB/s): min=81184, max=99168, per=87.53%, avg=88168.00, stdev=7712.92, samples=4 00:18:13.749 iops : min= 5074, max= 6198, avg=5510.50, stdev=482.06, samples=4 00:18:13.749 lat (msec) : 2=0.01%, 4=1.66%, 10=87.57%, 20=10.77% 00:18:13.749 cpu : usr=76.31%, sys=16.11%, ctx=4, majf=0, minf=20 00:18:13.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:13.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:13.749 issued rwts: total=21346,11068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:13.749 00:18:13.750 Run status group 0 (all jobs): 00:18:13.750 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=334MiB (350MB), run=2005-2005msec 
00:18:13.750 WRITE: bw=98.4MiB/s (103MB/s), 98.4MiB/s-98.4MiB/s (103MB/s-103MB/s), io=173MiB (181MB), run=1758-1758msec 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.750 rmmod nvme_tcp 00:18:13.750 rmmod nvme_fabrics 00:18:13.750 rmmod nvme_keyring 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87691 ']' 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87691 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87691 ']' 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87691 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87691 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87691' 00:18:13.750 killing process with pid 87691 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87691 00:18:13.750 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87691 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:14.320 00:18:14.320 real 0m8.355s 00:18:14.320 user 0m33.698s 00:18:14.320 sys 0m2.147s 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.320 ************************************ 00:18:14.320 END TEST nvmf_fio_host 00:18:14.320 ************************************ 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.320 ************************************ 00:18:14.320 START TEST nvmf_failover 00:18:14.320 ************************************ 00:18:14.320 07:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:14.320 * Looking for test storage... 00:18:14.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 
-- # NVME_CONNECT='nvme connect' 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.320 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.321 
07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.321 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:14.581 Cannot find device "nvmf_tgt_br" 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.581 Cannot find device "nvmf_tgt_br2" 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:14.581 Cannot find device "nvmf_tgt_br" 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:14.581 Cannot find device "nvmf_tgt_br2" 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.581 07:32:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:14.581 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:14.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:18:14.842 00:18:14.842 --- 10.0.0.2 ping statistics --- 00:18:14.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.842 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:14.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:14.842 00:18:14.842 --- 10.0.0.3 ping statistics --- 00:18:14.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.842 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:14.842 00:18:14.842 --- 10.0.0.1 ping statistics --- 00:18:14.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.842 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=88078 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 88078 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88078 ']' 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.842 07:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:14.842 [2024-07-25 07:32:47.446289] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:14.842 [2024-07-25 07:32:47.446338] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.101 [2024-07-25 07:32:47.578481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:15.101 [2024-07-25 07:32:47.667976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:15.101 [2024-07-25 07:32:47.668020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.101 [2024-07-25 07:32:47.668026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.101 [2024-07-25 07:32:47.668030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.102 [2024-07-25 07:32:47.668034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.102 [2024-07-25 07:32:47.669046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.102 [2024-07-25 07:32:47.669195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.102 [2024-07-25 07:32:47.669199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.670 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.670 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:15.670 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.670 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.671 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:15.671 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.671 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:15.930 [2024-07-25 07:32:48.498579] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.930 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:16.189 Malloc0 00:18:16.189 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.449 07:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.449 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.708 [2024-07-25 07:32:49.291091] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.708 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:16.967 [2024-07-25 07:32:49.466895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:16.967 [2024-07-25 07:32:49.642657] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88184 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88184 /var/tmp/bdevperf.sock 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88184 ']' 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.967 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.968 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.968 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.968 07:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:17.906 07:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.906 07:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:17.906 07:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:18.165 NVMe0n1 00:18:18.165 07:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:18.425 00:18:18.425 07:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88230 00:18:18.425 07:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.425 07:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:19.364 07:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.624 [2024-07-25 07:32:52.234687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.235993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236396] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.236959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the 
state(5) to be set 00:18:19.624 [2024-07-25 07:32:52.237303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 [2024-07-25 07:32:52.237818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae50 is same with the state(5) to be set 00:18:19.625 07:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:22.919 07:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:22.919 00:18:22.919 07:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:23.178 [2024-07-25 07:32:55.768558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 
[2024-07-25 07:32:55.768628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.178 [2024-07-25 07:32:55.768819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 [2024-07-25 07:32:55.768856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186bbd0 is same with the state(5) to be set 00:18:23.179 07:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:26.500 07:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.500 [2024-07-25 07:32:58.976363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.500 07:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:27.437 07:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:27.696 [2024-07-25 07:33:00.233105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233254] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 [2024-07-25 07:33:00.233270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24f30 is same with the state(5) to be set 00:18:27.696 07:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88230 00:18:34.270 0 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88184 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88184 ']' 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88184 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88184 00:18:34.270 killing process with pid 88184 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88184' 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88184 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88184 00:18:34.270 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:34.270 [2024-07-25 07:32:49.698555] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:34.270 [2024-07-25 07:32:49.698641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88184 ] 00:18:34.270 [2024-07-25 07:32:49.837608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.270 [2024-07-25 07:32:49.963378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.270 Running I/O for 15 seconds... 
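The listener juggling logged above is the failover sequence driven by test/nvmf/host/failover.sh (script lines 45-57 as echoed in this log): a second path on port 4422 is attached to the bdevperf-managed controller NVMe0, the 4421 listener is removed from the target to force a failover, the original 4420 listener is restored, and the temporary 4422 listener is then retired before bdevperf is reaped and its log (try.txt) is dumped below. Purely as a reference sketch, the same sequence can be replayed by hand with the exact rpc.py calls that appear in this log; the socket path, addresses, ports and NQN are the ones used by this particular job, not a general recipe:
# Attach a second path (port 4422) to the NVMe0 controller owned by bdevperf.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
# Drop the 4421 listener on the target so in-flight I/O has to fail over.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
# Bring the original 4420 listener back, then retire the temporary 4422 one.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422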
00:18:34.270 [2024-07-25 07:32:52.238273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.270 [2024-07-25 07:32:52.238330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.270 [2024-07-25 07:32:52.238368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.270 [2024-07-25 07:32:52.238383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.270 [2024-07-25 07:32:52.238394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.270 [2024-07-25 07:32:52.238403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.270 [2024-07-25 07:32:52.238420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.270 [2024-07-25 07:32:52.238431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.270 [2024-07-25 07:32:52.238457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.270 [2024-07-25 07:32:52.238465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.270 [2024-07-25 07:32:52.238475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 
07:32:52.238565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.238986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.238994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.271 [2024-07-25 07:32:52.239231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.271 [2024-07-25 07:32:52.239239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.272 [2024-07-25 07:32:52.239256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.272 [2024-07-25 07:32:52.239277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.272 [2024-07-25 07:32:52.239294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.272 [2024-07-25 07:32:52.239312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.272 [2024-07-25 07:32:52.239336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100696 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.272 [2024-07-25 07:32:52.239372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:34.272 [2024-07-25 07:32:52.239552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.272 [2024-07-25 07:32:52.239956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.272 [2024-07-25 07:32:52.239964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.239973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.239996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:34.273 [2024-07-25 07:32:52.240514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.273 [2024-07-25 07:32:52.240679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.273 [2024-07-25 07:32:52.240687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240696] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:52.240705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:52.240723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:52.240740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:52.240762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:52.240779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:52.240796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9d8a0 is same with the state(5) to be set 00:18:34.274 [2024-07-25 07:32:52.240816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.274 [2024-07-25 07:32:52.240823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.274 [2024-07-25 07:32:52.240830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100752 len:8 PRP1 0x0 PRP2 0x0 00:18:34.274 [2024-07-25 07:32:52.240838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240903] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c9d8a0 was disconnected and freed. reset controller. 
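Everything from "Running I/O for 15 seconds..." down to the notice just above is the bdevperf log (try.txt) for the first failover event: once the 4420 path disappears, every in-flight READ/WRITE on I/O qpair 1 is completed with "ABORTED - SQ DELETION", the qpair at 0x1c9d8a0 is disconnected and freed, and bdev_nvme decides to reset the controller. A quick, purely illustrative way to summarize a dump like this (assuming try.txt has been copied off the test VM) is to count the aborts and locate the qpair teardown markers:
# Number of in-flight commands completed as ABORTED - SQ DELETION.
grep -c 'ABORTED - SQ DELETION' try.txt
# Where each qpair was torn down before the controller reset.
grep -n 'disconnected and freed' try.txt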
00:18:34.274 [2024-07-25 07:32:52.240922] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:34.274 [2024-07-25 07:32:52.240972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.274 [2024-07-25 07:32:52.240982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.240992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.274 [2024-07-25 07:32:52.241001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.241009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.274 [2024-07-25 07:32:52.241018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.241026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.274 [2024-07-25 07:32:52.241037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:52.241046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.274 [2024-07-25 07:32:52.243706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.274 [2024-07-25 07:32:52.243740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4ce30 (9): Bad file descriptor 00:18:34.274 [2024-07-25 07:32:52.272065] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:34.274 [2024-07-25 07:32:55.768969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769247] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.274 [2024-07-25 07:32:55.769441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.274 [2024-07-25 07:32:55.769448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38464 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.275 [2024-07-25 07:32:55.769852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 
[2024-07-25 07:32:55.769903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.275 [2024-07-25 07:32:55.769927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.275 [2024-07-25 07:32:55.769936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.769944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.769953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.769960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.769968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.769975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.769984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.769991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 [2024-07-25 07:32:55.770622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.276 [2024-07-25 07:32:55.770630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.276 
[2024-07-25 07:32:55.770639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770811] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.277 [2024-07-25 07:32:55.770974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.770991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.770999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.277 [2024-07-25 07:32:55.771268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:34.277 [2024-07-25 07:32:55.771311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:34.277 [2024-07-25 07:32:55.771318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38656 len:8 PRP1 0x0 PRP2 0x0 00:18:34.277 [2024-07-25 07:32:55.771328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.277 [2024-07-25 07:32:55.771392] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cc2d90 was disconnected and freed. reset controller. 
00:18:34.277 [2024-07-25 07:32:55.771404] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:34.277 [2024-07-25 07:32:55.771451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.277 [2024-07-25 07:32:55.771462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:32:55.771471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.278 [2024-07-25 07:32:55.771480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:32:55.771489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.278 [2024-07-25 07:32:55.771497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:32:55.771506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.278 [2024-07-25 07:32:55.771513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:32:55.771522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.278 [2024-07-25 07:32:55.773996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.278 [2024-07-25 07:32:55.774030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4ce30 (9): Bad file descriptor 00:18:34.278 [2024-07-25 07:32:55.802232] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:34.278 [2024-07-25 07:33:00.233797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.233876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.233893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.233912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.233927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.233971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.233987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.278 [2024-07-25 07:33:00.233993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:34.278 [2024-07-25 07:33:00.234223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.278 [2024-07-25 07:33:00.234231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:34.278-281 [2024-07-25 07:33:00.234239 - 07:33:00.236208] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [remaining queued commands condensed: WRITE sqid:1 lba:59544-60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:59112-59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each reported and completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 during qpair teardown]
00:18:34.281 [2024-07-25 07:33:00.236234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:34.281 [2024-07-25 07:33:00.236241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:34.281 [2024-07-25 07:33:00.236247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59432 len:8 PRP1 0x0 PRP2 0x0
00:18:34.281 [2024-07-25 07:33:00.236256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.281 [2024-07-25 07:33:00.236320] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd9bb0 was disconnected and freed. reset controller.
00:18:34.281 [2024-07-25 07:33:00.236330] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:18:34.281 [2024-07-25 07:33:00.236374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:34.281 [2024-07-25 07:33:00.236384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.281 [2024-07-25 07:33:00.236394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:34.281 [2024-07-25 07:33:00.236407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.281 [2024-07-25 07:33:00.236415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:34.281 [2024-07-25 07:33:00.236423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.281 [2024-07-25 07:33:00.236431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:34.282 [2024-07-25 07:33:00.236438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.282 [2024-07-25 07:33:00.236447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:34.282 [2024-07-25 07:33:00.236476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4ce30 (9): Bad file descriptor
00:18:34.282 [2024-07-25 07:33:00.238949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:34.282 [2024-07-25 07:33:00.273014] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:34.282
00:18:34.282 Latency(us)
00:18:34.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.282 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:34.282 Verification LBA range: start 0x0 length 0x4000
00:18:34.282 NVMe0n1 : 15.01 11036.12 43.11 322.83 0.00 11247.15 1430.92 21520.99
00:18:34.282 ===================================================================================================================
00:18:34.282 Total : 11036.12 43.11 322.83 0.00 11247.15 1430.92 21520.99
00:18:34.282 Received shutdown signal, test time was about 15.000000 seconds
00:18:34.282
00:18:34.282 Latency(us)
00:18:34.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.282 ===================================================================================================================
00:18:34.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88435
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88435 /var/tmp/bdevperf.sock
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88435 ']'
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:34.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
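(Editor's condensed sketch of the check traced just above, reconstructed only from the commands visible in the log: the expected count of 3 and the bdevperf flags come straight from the trace, while the log file name try.txt is assumed from the later `cat` step and the surrounding failover.sh control flow may differ.)

  # the 15 s run is expected to have logged exactly three successful controller resets
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count != 3 )) && exit 1
  # relaunch bdevperf idle (-z): it waits on /var/tmp/bdevperf.sock for a perform_tests RPC
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!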
00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.282 07:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:34.849 07:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.849 07:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:34.849 07:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.108 [2024-07-25 07:33:07.599735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:35.108 07:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:35.108 [2024-07-25 07:33:07.803862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:35.108 07:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:35.367 NVMe0n1 00:18:35.367 07:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:35.625 00:18:35.625 07:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:35.883 00:18:36.142 07:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:36.142 07:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:36.142 07:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:36.401 07:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:39.695 07:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:39.695 07:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:39.695 07:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88572 00:18:39.695 07:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.695 07:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88572 00:18:40.642 0 00:18:40.642 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:40.642 [2024-07-25 07:33:06.563291] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
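(Editor's recap of the rpc.py sequence traced above, which sets up the next failover scenario: the target exposes nqn.2016-06.io.spdk:cnode1 on two additional portals, the bdevperf host attaches the same subsystem through all three, and the original path is detached so bdev_nvme must fail over. Addresses, ports and NQNs are copied from the trace; absolute /home/vagrant paths are shortened and the loop is an editorial condensation. The bdevperf start-up log dumped from try.txt continues below.)

  # target: listen on two extra TCP portals
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # host (bdevperf RPC socket): attach the controller via all three portals
  for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active path and give bdev_nvme time to fail over to the next trid
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # run a 1-second verify workload against the surviving paths
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests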
00:18:40.642 [2024-07-25 07:33:06.563474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88435 ] 00:18:40.642 [2024-07-25 07:33:06.702041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.642 [2024-07-25 07:33:06.825985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.642 [2024-07-25 07:33:08.984292] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:40.642 [2024-07-25 07:33:08.984452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.642 [2024-07-25 07:33:08.984470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.642 [2024-07-25 07:33:08.984483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.642 [2024-07-25 07:33:08.984492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.642 [2024-07-25 07:33:08.984502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.642 [2024-07-25 07:33:08.984511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.642 [2024-07-25 07:33:08.984519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.642 [2024-07-25 07:33:08.984527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.642 [2024-07-25 07:33:08.984535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.642 [2024-07-25 07:33:08.984569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.642 [2024-07-25 07:33:08.984592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84ae30 (9): Bad file descriptor 00:18:40.642 [2024-07-25 07:33:08.989355] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:40.642 Running I/O for 1 seconds... 
00:18:40.642 00:18:40.642 Latency(us) 00:18:40.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.642 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:40.642 Verification LBA range: start 0x0 length 0x4000 00:18:40.642 NVMe0n1 : 1.01 10899.77 42.58 0.00 0.00 11676.01 1459.54 12878.25 00:18:40.642 =================================================================================================================== 00:18:40.642 Total : 10899.77 42.58 0.00 0.00 11676.01 1459.54 12878.25 00:18:40.642 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:40.642 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:40.900 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:41.158 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:41.158 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:41.417 07:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:41.417 07:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88435 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88435 ']' 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88435 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88435 00:18:44.707 killing process with pid 88435 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88435' 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88435 00:18:44.707 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88435 00:18:44.966 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:44.966 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.225 07:33:17 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.225 rmmod nvme_tcp 00:18:45.225 rmmod nvme_fabrics 00:18:45.225 rmmod nvme_keyring 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 88078 ']' 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 88078 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88078 ']' 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88078 00:18:45.225 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:45.226 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.226 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88078 00:18:45.486 killing process with pid 88078 00:18:45.486 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:45.486 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:45.486 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88078' 00:18:45.486 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88078 00:18:45.486 07:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88078 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.486 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.745 
07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:45.745 ************************************ 00:18:45.745 END TEST nvmf_failover 00:18:45.745 ************************************ 00:18:45.745 00:18:45.745 real 0m31.368s 00:18:45.745 user 2m1.240s 00:18:45.745 sys 0m4.199s 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.745 ************************************ 00:18:45.745 START TEST nvmf_host_discovery 00:18:45.745 ************************************ 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:45.745 * Looking for test storage... 00:18:45.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.745 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.004 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
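
Before any interfaces are created, the values that matter for the rest of this test are already fixed by nvmf/common.sh and host/discovery.sh. Pulled together from the assignments traced above (the grouping and comments are a reading aid, not part of the scripts):

  # Fabric addresses and ports (nvmf/common.sh)
  NVMF_PORT=4420                  # first data listener on the subsystem
  NVMF_SECOND_PORT=4421           # second listener, added later for the two-path check
  NVMF_INITIATOR_IP=10.0.0.1      # initiator end of the veth topology
  NVMF_FIRST_TARGET_IP=10.0.0.2   # target address inside the nvmf_tgt_ns_spdk namespace
  NVMF_SECOND_TARGET_IP=10.0.0.3  # second target address in the same namespace

  # Discovery-test identifiers (host/discovery.sh)
  DISCOVERY_PORT=8009
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
  NQN=nqn.2016-06.io.spdk:cnode   # subsystem NQN prefix; cnode0 is the one exercised below
  HOST_NQN=nqn.2021-12.io.spdk:test
  HOST_SOCK=/tmp/host.sock        # RPC socket of the host-side (initiator) SPDK instance
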
00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:46.005 Cannot find device "nvmf_tgt_br" 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.005 Cannot find device "nvmf_tgt_br2" 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:46.005 Cannot find device "nvmf_tgt_br" 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:46.005 Cannot find device "nvmf_tgt_br2" 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.005 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.264 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:46.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:18:46.265 00:18:46.265 --- 10.0.0.2 ping statistics --- 00:18:46.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.265 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:46.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:46.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:18:46.265 00:18:46.265 --- 10.0.0.3 ping statistics --- 00:18:46.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.265 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:46.265 00:18:46.265 --- 10.0.0.1 ping statistics --- 00:18:46.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.265 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88863 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88863 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88863 ']' 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
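
The nvmf_veth_init block traced above builds a small bridged topology and then verifies it with three pings. A condensed sketch of those commands (teardown of any leftover interfaces and the error paths are omitted; refer to nvmf/common.sh for the real thing):

  # One namespace for the target, three veth pairs, all enslaved to one bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target    <-> bridge
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target #2 <-> bridge
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host
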
00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.265 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.265 [2024-07-25 07:33:18.878088] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:46.265 [2024-07-25 07:33:18.878164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.524 [2024-07-25 07:33:19.014919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.524 [2024-07-25 07:33:19.096165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.524 [2024-07-25 07:33:19.096210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.524 [2024-07-25 07:33:19.096216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.524 [2024-07-25 07:33:19.096221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.524 [2024-07-25 07:33:19.096224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.524 [2024-07-25 07:33:19.096260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 [2024-07-25 07:33:19.769815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 [2024-07-25 07:33:19.777888] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 
-- # rpc_cmd bdev_null_create null0 1000 512 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 null0 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 null1 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88921 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88921 /tmp/host.sock 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88921 ']' 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:47.093 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.093 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.352 [2024-07-25 07:33:19.870728] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
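
At this point two SPDK processes are up and the target side has been primed over RPC. Condensed from the trace (rpc_cmd with no -s talks to the target's default /var/tmp/spdk.sock; both processes run in the background and are waited for with waitforlisten, which reported pids 88863 and 88921 in this run):

  # Target: nvmf_tgt inside the namespace, tracing enabled, reactor on core 1.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 &

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # options come from NVMF_TRANSPORT_OPTS ('-t tcp -o') plus '-u 8192'
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                     # discovery service on DISCOVERY_PORT
  rpc_cmd bdev_null_create null0 1000 512            # null bdevs that will back cnode0's namespaces
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # Host: a second nvmf_tgt acting as the initiator, driven through /tmp/host.sock.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
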
00:18:47.352 [2024-07-25 07:33:19.870786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88921 ] 00:18:47.352 [2024-07-25 07:33:20.009045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.616 [2024-07-25 07:33:20.126918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.222 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.223 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.482 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 [2024-07-25 07:33:21.075780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:48.482 07:33:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.482 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.741 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:48.742 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:49.001 [2024-07-25 07:33:21.730479] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:49.001 [2024-07-25 07:33:21.730519] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:49.001 [2024-07-25 07:33:21.730532] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:49.260 
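
What the trace above boils down to: the host-side instance follows the discovery service, and a controller only appears once the target publishes a subsystem this host NQN is allowed to connect to. A condensed sketch of the RPC sequence (the get_subsystem_names/get_bdev_list helpers seen earlier are simply bdev_nvme_get_controllers and bdev_get_bdevs piped through jq, sort and xargs):

  # Host side: follow the discovery service at 10.0.0.2:8009 as nqn.2021-12.io.spdk:test.
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Nothing attaches yet: the discovery log page is still empty for this host.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # -> empty
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name'    # -> empty

  # Target side: publish cnode0 with one namespace, a 4420 listener, and this host allowed.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # The discovery poller on the host now sees cnode0:4420, attaches it as controller
  # nvme0, and bdev nvme0n1 becomes visible through bdev_get_bdevs on /tmp/host.sock.
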
[2024-07-25 07:33:21.816426] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:49.260 [2024-07-25 07:33:21.873396] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:49.260 [2024-07-25 07:33:21.873436] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
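
The repeated get_subsystem_names/get_bdev_list comparisons are driven by a small retry helper, waitforcondition, whose pieces ((( max-- )), eval of the condition, sleep 1) are scattered through the trace. Reconstructed here as a sketch; the behaviour when all ten attempts fail is not visible in this run, so that branch is an assumption:

  # Re-evaluate a shell condition once per second, up to 10 times.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1    # assumption: never reached in this run, every condition converged
  }

  # As used above:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
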
00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:49.831 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:50.092 07:33:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.092 [2024-07-25 07:33:22.621293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:50.092 [2024-07-25 07:33:22.621635] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:50.092 [2024-07-25 07:33:22.621672] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.092 
07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.092 [2024-07-25 07:33:22.707517] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:50.092 07:33:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:50.092 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.092 [2024-07-25 07:33:22.771658] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:50.092 [2024-07-25 07:33:22.771681] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:50.093 [2024-07-25 07:33:22.771685] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:50.093 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:50.093 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:51.472 07:33:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.472 [2024-07-25 07:33:23.912216] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:51.472 [2024-07-25 07:33:23.912256] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.472 [2024-07-25 07:33:23.916812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.472 [2024-07-25 07:33:23.916839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.472 [2024-07-25 07:33:23.916847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.472 [2024-07-25 07:33:23.916853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.472 [2024-07-25 
07:33:23.916859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.472 [2024-07-25 07:33:23.916865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.472 [2024-07-25 07:33:23.916871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.472 [2024-07-25 07:33:23.916876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.472 [2024-07-25 07:33:23.916881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:51.472 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:51.473 [2024-07-25 07:33:23.926749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.936748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.473 [2024-07-25 07:33:23.936869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.936884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.936891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.936902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.936912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.936918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.936925] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.936935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.473 [2024-07-25 07:33:23.946791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 [2024-07-25 07:33:23.946845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.946855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.946861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.946870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.946879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.946884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.946889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.946898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.473 [2024-07-25 07:33:23.956808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 [2024-07-25 07:33:23.956880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.956890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.956896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.956905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.956913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.956918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.956923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.956931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:51.473 [2024-07-25 07:33:23.966826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 [2024-07-25 07:33:23.966882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.966893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.966899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.966908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.966916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.966921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.966927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.966935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:51.473 [2024-07-25 07:33:23.976843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 [2024-07-25 07:33:23.976886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.976895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.976901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.976909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.976916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.976922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.976928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.976936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:51.473 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.473 [2024-07-25 07:33:23.986851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 [2024-07-25 07:33:23.986909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.986920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.986926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.986936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.986944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.986949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.986954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.986963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.473 [2024-07-25 07:33:23.996869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.473 [2024-07-25 07:33:23.996918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.473 [2024-07-25 07:33:23.996928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8c50 with addr=10.0.0.2, port=4420 00:18:51.473 [2024-07-25 07:33:23.996934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8c50 is same with the state(5) to be set 00:18:51.473 [2024-07-25 07:33:23.996943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f8c50 (9): Bad file descriptor 00:18:51.473 [2024-07-25 07:33:23.996950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.473 [2024-07-25 07:33:23.996955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.473 [2024-07-25 07:33:23.996960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.473 [2024-07-25 07:33:23.996968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:51.473 [2024-07-25 07:33:23.999346] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:51.473 [2024-07-25 07:33:23.999367] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.473 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:51.474 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:51.733 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:51.734 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:51.734 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.734 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:51.734 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.734 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.671 [2024-07-25 07:33:25.333927] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:52.671 [2024-07-25 07:33:25.333978] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:52.671 [2024-07-25 07:33:25.334009] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:52.931 [2024-07-25 07:33:25.419843] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:52.931 [2024-07-25 07:33:25.480103] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:52.931 [2024-07-25 07:33:25.480161] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.931 2024/07/25 07:33:25 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:52.931 request: 00:18:52.931 { 00:18:52.931 "method": "bdev_nvme_start_discovery", 00:18:52.931 "params": { 00:18:52.931 "name": "nvme", 00:18:52.931 "trtype": "tcp", 00:18:52.931 "traddr": "10.0.0.2", 00:18:52.931 "adrfam": "ipv4", 00:18:52.931 "trsvcid": "8009", 00:18:52.931 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:52.931 "wait_for_attach": true 00:18:52.931 } 00:18:52.931 } 00:18:52.931 Got JSON-RPC error response 00:18:52.931 GoRPCClient: error on JSON-RPC call 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 
-- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:52.931 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.932 2024/07/25 07:33:25 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:52.932 request: 00:18:52.932 { 00:18:52.932 "method": "bdev_nvme_start_discovery", 00:18:52.932 "params": { 00:18:52.932 "name": "nvme_second", 00:18:52.932 "trtype": "tcp", 00:18:52.932 "traddr": "10.0.0.2", 00:18:52.932 "adrfam": "ipv4", 00:18:52.932 "trsvcid": "8009", 00:18:52.932 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:52.932 "wait_for_attach": true 00:18:52.932 } 00:18:52.932 } 00:18:52.932 Got JSON-RPC error response 00:18:52.932 GoRPCClient: error on JSON-RPC call 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.932 07:33:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:52.932 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.192 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.131 [2024-07-25 07:33:26.734558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.131 [2024-07-25 07:33:26.734640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f27b0 with addr=10.0.0.2, port=8010 00:18:54.131 [2024-07-25 07:33:26.734666] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:54.131 [2024-07-25 07:33:26.734673] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:54.131 [2024-07-25 07:33:26.734681] bdev_nvme.c:7073:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:18:55.070 [2024-07-25 07:33:27.732618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.070 [2024-07-25 07:33:27.732693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f27b0 with addr=10.0.0.2, port=8010 00:18:55.070 [2024-07-25 07:33:27.732716] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:55.070 [2024-07-25 07:33:27.732723] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:55.070 [2024-07-25 07:33:27.732730] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:56.013 [2024-07-25 07:33:28.730545] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:56.013 2024/07/25 07:33:28 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:56.013 request: 00:18:56.013 { 00:18:56.014 "method": "bdev_nvme_start_discovery", 00:18:56.014 "params": { 00:18:56.014 "name": "nvme_second", 00:18:56.014 "trtype": "tcp", 00:18:56.014 "traddr": "10.0.0.2", 00:18:56.014 "adrfam": "ipv4", 00:18:56.014 "trsvcid": "8010", 00:18:56.014 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:56.014 "wait_for_attach": false, 00:18:56.014 "attach_timeout_ms": 3000 00:18:56.014 } 00:18:56.014 } 00:18:56.014 Got JSON-RPC error response 00:18:56.014 GoRPCClient: error on JSON-RPC call 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:56.014 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88921 00:18:56.274 07:33:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.274 rmmod nvme_tcp 00:18:56.274 rmmod nvme_fabrics 00:18:56.274 rmmod nvme_keyring 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88863 ']' 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88863 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88863 ']' 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88863 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88863 00:18:56.274 killing process with pid 88863 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88863' 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88863 00:18:56.274 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88863 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:18:56.534 00:18:56.534 real 0m10.897s 00:18:56.534 user 0m21.217s 00:18:56.534 sys 0m1.789s 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.534 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:56.534 ************************************ 00:18:56.534 END TEST nvmf_host_discovery 00:18:56.534 ************************************ 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.793 ************************************ 00:18:56.793 START TEST nvmf_host_multipath_status 00:18:56.793 ************************************ 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:56.793 * Looking for test storage... 00:18:56.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 
00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.793 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:56.794 
07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:56.794 Cannot find device "nvmf_tgt_br" 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:56.794 Cannot find device "nvmf_tgt_br2" 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:56.794 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:57.054 Cannot find device "nvmf_tgt_br" 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:57.054 Cannot find device "nvmf_tgt_br2" 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.054 07:33:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.054 07:33:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:57.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:18:57.054 00:18:57.054 --- 10.0.0.2 ping statistics --- 00:18:57.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.054 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:57.054 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:57.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:57.054 00:18:57.054 --- 10.0.0.3 ping statistics --- 00:18:57.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.054 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:18:57.055 00:18:57.055 --- 10.0.0.1 ping statistics --- 00:18:57.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.055 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89398 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89398 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # 
'[' -z 89398 ']' 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.055 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:57.314 [2024-07-25 07:33:29.831995] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:18:57.314 [2024-07-25 07:33:29.832053] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.314 [2024-07-25 07:33:29.961840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:57.574 [2024-07-25 07:33:30.084321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.574 [2024-07-25 07:33:30.084370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.574 [2024-07-25 07:33:30.084376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.574 [2024-07-25 07:33:30.084381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.574 [2024-07-25 07:33:30.084385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
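The target start traced above reduces to launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and waiting for its RPC socket before configuring it. A minimal sketch under that assumption; the retry loop is only a stand-in for the waitforlisten helper, using rpc_get_methods as a liveness probe:

# start the target in the test namespace with the same core mask and trace flags as in the log
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# poll the default RPC socket until the target answers (stand-in for waitforlisten)
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done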
00:18:57.574 [2024-07-25 07:33:30.085484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.574 [2024-07-25 07:33:30.085486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89398 00:18:58.142 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.401 [2024-07-25 07:33:30.902482] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.401 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:58.401 Malloc0 00:18:58.659 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:58.659 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.918 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.177 [2024-07-25 07:33:31.667612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:59.177 [2024-07-25 07:33:31.843341] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89505 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89505 /var/tmp/bdevperf.sock 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89505 ']' 00:18:59.177 07:33:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.177 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:00.113 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.113 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:19:00.113 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:00.373 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:00.632 Nvme0n1 00:19:00.632 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:00.892 Nvme0n1 00:19:00.892 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:00.892 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:03.426 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:03.426 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:03.426 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:03.426 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:04.416 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:04.416 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:04.416 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.416 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.416 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.416 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:04.416 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.416 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.675 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.675 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.675 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.675 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.934 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.934 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.934 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.934 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.193 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.452 07:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.452 07:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state 
non_optimized optimized 00:19:05.452 07:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:05.711 07:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:05.970 07:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:06.907 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:06.907 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:06.907 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.907 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.166 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.425 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.425 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.425 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.425 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:07.685 07:33:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.685 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:07.944 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.944 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:07.944 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:08.203 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:08.203 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:09.581 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:09.581 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:09.581 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.581 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.581 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.581 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:09.581 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:09.581 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.840 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.099 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.099 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:10.099 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:10.100 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.358 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.358 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:10.358 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:10.358 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.617 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.617 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:10.617 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:10.617 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:10.875 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:11.811 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:11.811 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:11.811 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.811 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:12.070 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.070 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:12.070 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.070 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:12.329 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:12.329 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:12.329 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.329 07:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:12.589 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.848 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.848 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:12.848 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.848 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:13.107 07:33:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.107 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:13.107 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:13.366 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:13.366 07:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.765 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:15.024 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.024 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:15.024 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.024 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:15.283 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.283 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:15.283 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.283 07:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:15.542 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.542 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:15.542 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.542 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:15.542 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.543 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:15.543 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:15.802 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:16.060 07:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:16.997 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:16.997 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:16.997 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.997 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:17.256 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:17.256 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:17.256 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.256 07:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.516 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.776 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.776 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:17.776 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:17.776 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.035 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:18.035 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:18.035 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.035 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.294 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.294 07:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:18.294 07:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:18.294 07:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:18.554 07:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:18.813 07:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@120 -- # sleep 1 00:19:19.749 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:19.749 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:19.749 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.749 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:20.008 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.008 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:20.008 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.008 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.267 07:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:20.525 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.525 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:20.525 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.525 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:20.782 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.782 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
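Every check_status call traced above is built from the same port_status helper: dump bdevperf's I/O paths over its RPC socket and select one attribute of the path bound to a given listener port with jq. A condensed sketch reconstructed from the traced commands (the multipath_status.sh original differs only in plumbing):

port_status() {
    local port=$1 attr=$2 expected=$3
    # ask bdevperf for its I/O paths and pull out one field for the path on $port
    local actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# example from the checks above: 4420 is the current path, 4421 is not
port_status 4420 current true
port_status 4421 current false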
00:19:20.782 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.782 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:21.041 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.041 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:21.041 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:21.041 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:21.298 07:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:22.233 07:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:22.234 07:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:22.234 07:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.234 07:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:22.492 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.492 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:22.493 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.493 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:22.751 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.751 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:22.751 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.751 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:23.010 07:33:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.010 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:23.269 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.269 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:23.269 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.269 07:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:23.527 07:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.527 07:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:23.527 07:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:23.786 07:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:23.786 07:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.198 07:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:25.457 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.457 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.457 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.457 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.716 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:25.975 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.975 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:25.975 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:26.235 07:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:26.494 07:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:27.429 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:27.429 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:27.429 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.429 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:27.688 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.688 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:27.688 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.688 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.947 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.207 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.207 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:28.207 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.207 07:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.466 07:34:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.466 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:28.466 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.466 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89505 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89505 ']' 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89505 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89505 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:28.725 killing process with pid 89505 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89505' 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89505 00:19:28.725 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89505 00:19:28.725 Connection closed with partial response: 00:19:28.725 00:19:28.725 00:19:29.003 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89505 00:19:29.003 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:29.003 [2024-07-25 07:33:31.917722] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:29.003 [2024-07-25 07:33:31.917817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89505 ] 00:19:29.003 [2024-07-25 07:33:32.055693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.003 [2024-07-25 07:33:32.180909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.003 Running I/O for 90 seconds... 
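(The multipath trace above repeats a single pattern; a minimal bash sketch of it follows, built only from the RPC calls and jq filters visible in this log. It assumes the same running target, the nqn.2016-06.io.spdk:cnode1 subsystem, and the bdevperf RPC socket used by this job; the port and ANA state shown are just one of the combinations exercised above.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Flip the ANA state of the two listeners (what set_ANA_state does in
    # host/multipath_status.sh), then give the host a moment to notice.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1
    # Ask bdevperf over its RPC socket for the io_paths it sees and pull out one
    # port's flag with jq, as port_status() does for current/connected/accessible.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible'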
00:19:29.003 [2024-07-25 07:33:45.862604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.862978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.862991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.003 [2024-07-25 07:33:45.863342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.003 [2024-07-25 07:33:45.863351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.004 [2024-07-25 07:33:45.863420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.004 [2024-07-25 07:33:45.863676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.863853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.863862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:19:29.004 [2024-07-25 07:33:45.864573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-07-25 07:33:45.864710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.004 [2024-07-25 07:33:45.864724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.864981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.864989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.005 [2024-07-25 07:33:45.865285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.005 [2024-07-25 07:33:45.865624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.005 [2024-07-25 07:33:45.865637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.865837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.865962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:19:29.006 [2024-07-25 07:33:45.865986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.865995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.006 [2024-07-25 07:33:45.866844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.866872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.866895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.866918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.866941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.866962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.866976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.866985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.006 [2024-07-25 07:33:45.867209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.006 [2024-07-25 07:33:45.867218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.007 [2024-07-25 07:33:45.867361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35360 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.007 [2024-07-25 07:33:45.867846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.007 [2024-07-25 07:33:45.867960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.007 [2024-07-25 07:33:45.867971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.867984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.867993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:19:29.008 [2024-07-25 07:33:45.868491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.868983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.868992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.008 [2024-07-25 07:33:45.869182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.008 [2024-07-25 07:33:45.869309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.008 [2024-07-25 07:33:45.869317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:19:29.009 [2024-07-25 07:33:45.869848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.009 [2024-07-25 07:33:45.869921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.869944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.869968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.869981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.869989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.009 [2024-07-25 07:33:45.870192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:29.009 [2024-07-25 07:33:45.870205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.010 [2024-07-25 07:33:45.870213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.870229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.010 [2024-07-25 07:33:45.870237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.870878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.010 [2024-07-25 07:33:45.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.870914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.010 [2024-07-25 07:33:45.870922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.870936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.870953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.870967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.870975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.870990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.870998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.010 [2024-07-25 07:33:45.871177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.871349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.871357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35280 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.010 [2024-07-25 07:33:45.877411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.010 [2024-07-25 07:33:45.877424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.010 [2024-07-25 07:33:45.877432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
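[editor's note] Every completion printed in this run carries the same status, shown as ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3 (Path Related Status) and status code 0x02 (Asymmetric Access Inaccessible), with p/m/dnr being the phase, "more", and do-not-retry bits that spdk_nvme_print_completion echoes. The standalone C sketch below is illustrative only (it is not SPDK source); it decodes a raw 16-bit completion status half-word, phase bit included, into the same fields these log lines show. The field layout follows the NVMe base specification, and the sample value 0x0604 is a hypothetical encoding of the (03/02) status seen here.

#include <stdio.h>
#include <stdint.h>

/* Decode the 16-bit status half-word (CQE DW3 bits 31:16, phase bit
 * included). Layout per the NVMe base spec:
 *   bit 0      P   (phase tag)        -> "p:"
 *   bits 8:1   SC  (status code)      -> second number in "(xx/yy)"
 *   bits 11:9  SCT (status code type) -> first number in "(xx/yy)"
 *   bit 14     M   (more)             -> "m:"
 *   bit 15     DNR (do not retry)     -> "dnr:"
 */
static void decode_nvme_status(uint16_t s)
{
    unsigned p   = s & 0x1;
    unsigned sc  = (s >> 1) & 0xff;
    unsigned sct = (s >> 9) & 0x7;
    unsigned m   = (s >> 14) & 0x1;
    unsigned dnr = (s >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u", sct, sc, p, m, dnr);
    if (sct == 0x3 && sc == 0x02) {
        /* Path Related Status / Asymmetric Access Inaccessible,
         * i.e. the "(03/02)" repeated throughout this test run. */
        printf("  ASYMMETRIC ACCESS INACCESSIBLE");
    }
    printf("\n");
}

int main(void)
{
    decode_nvme_status(0x0604); /* hypothetical raw value encoding (03/02) */
    return 0;
}

[end editor's note]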
00:19:29.011 [2024-07-25 07:33:45.877576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.011 [2024-07-25 07:33:45.877605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.877626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.877647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.877667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.877688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.877709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.877721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.877729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.011 [2024-07-25 07:33:45.878726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.011 [2024-07-25 07:33:45.878739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.012 [2024-07-25 07:33:45.878768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.878981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.878994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.012 [2024-07-25 07:33:45.879394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.012 [2024-07-25 07:33:45.879406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:19:29.012 [2024-07-25 07:33:45.879419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.879737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.879978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.879992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.880670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.880696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.013 [2024-07-25 07:33:45.880718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.013 [2024-07-25 07:33:45.880740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.013 [2024-07-25 07:33:45.880921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.013 [2024-07-25 07:33:45.880935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.880943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.880957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.880964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.880978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.880987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:19:29.014 [2024-07-25 07:33:45.881404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.014 [2024-07-25 07:33:45.881657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.014 [2024-07-25 07:33:45.881750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.014 [2024-07-25 07:33:45.881763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.881771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.882205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.882221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.882237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.882245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.882259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.882269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.882283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.882291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.882305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.882314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.882329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.882337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.015 [2024-07-25 07:33:45.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.895983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.895993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.896009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.896019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.896035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.896045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.896061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.015 [2024-07-25 07:33:45.896071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.015 [2024-07-25 07:33:45.896088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:19:29.016 [2024-07-25 07:33:45.896380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.896945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.896955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.897752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.897775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.897796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.016 [2024-07-25 07:33:45.897807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.897824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.016 [2024-07-25 07:33:45.897835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.897851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.016 [2024-07-25 07:33:45.897862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.016 [2024-07-25 07:33:45.897879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.016 [2024-07-25 07:33:45.897890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.897906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.897917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.897934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.897961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.897979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.017 [2024-07-25 07:33:45.897989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.017 [2024-07-25 07:33:45.898248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.017 [2024-07-25 07:33:45.898905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.017 [2024-07-25 07:33:45.898919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.018 
[2024-07-25 07:33:45.898942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.898956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.898979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.898993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899687] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.018 [2024-07-25 07:33:45.899760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.899894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.899908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.900774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.900815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.900851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.900889] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.900925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.900975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.900998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.901011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.901034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.901047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.901070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.901084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.901107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.901136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.901160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.901174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.018 [2024-07-25 07:33:45.901197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.018 [2024-07-25 07:33:45.901210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.901983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.901997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:19:29.019 [2024-07-25 07:33:45.902408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.019 [2024-07-25 07:33:45.902653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.019 [2024-07-25 07:33:45.902675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.902975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.902989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.903012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.903027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.903049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.903063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.903086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.903100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.020 [2024-07-25 07:33:45.903133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.020 [2024-07-25 07:33:45.903147] nvme_qpair.c: 
[... 00:19:29.020-00:19:29.025 repeated nvme_qpair.c NOTICE output elided: for several hundred outstanding I/Os on qid:1, nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion report each WRITE (lba 35128-35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 34936-35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 between 2024-07-25 07:33:45.903 and 07:33:45.912 ...]
BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.025 [2024-07-25 07:33:45.912788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.025 [2024-07-25 07:33:45.912803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.912982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.912996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 
p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.913585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.913595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.917808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.917823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.026 [2024-07-25 07:33:45.917839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.026 [2024-07-25 07:33:45.917848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.917864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.917874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.917896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.917905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.917920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.917930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.917945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.917954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.917969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.917978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.917993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.918854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.918890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.918915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.918940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.918965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.918982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.918991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.027 [2024-07-25 07:33:45.919017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.027 [2024-07-25 07:33:45.919263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.027 [2024-07-25 07:33:45.919466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.027 [2024-07-25 07:33:45.919482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:19:29.028 [2024-07-25 07:33:45.919794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.919986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.919996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.920021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.920047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.920072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.920097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.920133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.028 [2024-07-25 07:33:45.920369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.028 [2024-07-25 07:33:45.920384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.028 [2024-07-25 07:33:45.920394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.029 [2024-07-25 07:33:45.920822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.920971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.920990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:19:29.029 [2024-07-25 07:33:45.921742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.029 [2024-07-25 07:33:45.921751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.029 [2024-07-25 07:33:45.921771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.921982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.921992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:45.922748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:45.922767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:59.016417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.030 [2024-07-25 07:33:59.016493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:59.016535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:59.016545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:59.016560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:59.016569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:59.016595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:59.016603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:59.016616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.030 [2024-07-25 07:33:59.016623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.030 [2024-07-25 07:33:59.016637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.030 [2024-07-25 07:33:59.016644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.016892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.016913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.016935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.016948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.016955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.018269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.018290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.031 [2024-07-25 07:33:59.018311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:19:29.031 [2024-07-25 07:33:59.018664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.018833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.018841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.021603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.031 [2024-07-25 07:33:59.021626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.031 [2024-07-25 07:33:59.021644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.021653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.021675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.021697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.021719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.021833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.021864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.021990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.021998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:29.032 [2024-07-25 07:33:59.022109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.032 [2024-07-25 07:33:59.022302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.022325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:82 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.022926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.022952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.022974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.022988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.022998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.032 [2024-07-25 07:33:59.023019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.032 [2024-07-25 07:33:59.023028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.023265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.023287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.023314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.023354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.023363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.024088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:19:29.033 [2024-07-25 07:33:59.024106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.024117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.033 [2024-07-25 07:33:59.024150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:29.033 [2024-07-25 07:33:59.024367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.033 [2024-07-25 07:33:59.024375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.033 Received shutdown signal, test time was about 27.673843 seconds 00:19:29.033 00:19:29.033 Latency(us) 00:19:29.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.033 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.033 Verification LBA range: start 0x0 length 0x4000 00:19:29.033 Nvme0n1 : 27.67 9344.01 36.50 0.00 0.00 13679.30 115.37 3077043.98 00:19:29.033 =================================================================================================================== 00:19:29.033 Total : 9344.01 36.50 0.00 0.00 13679.30 115.37 3077043.98 00:19:29.033 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.293 rmmod nvme_tcp 00:19:29.293 rmmod nvme_fabrics 00:19:29.293 rmmod nvme_keyring 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89398 ']' 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89398 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89398 ']' 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89398 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89398 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:29.293 killing process with pid 89398 00:19:29.293 07:34:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89398' 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89398 00:19:29.293 07:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89398 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.553 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:29.813 00:19:29.813 real 0m33.073s 00:19:29.813 user 1m45.454s 00:19:29.813 sys 0m7.925s 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:29.813 ************************************ 00:19:29.813 END TEST nvmf_host_multipath_status 00:19:29.813 ************************************ 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.813 ************************************ 00:19:29.813 START TEST nvmf_discovery_remove_ifc 00:19:29.813 ************************************ 00:19:29.813 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:29.813 * Looking for test storage... 
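Before the discovery_remove_ifc output starts in earnest, the multipath_status teardown traced just above is worth condensing: the commands and the target PID (89398) are exactly the ones visible in the xtrace, only collapsed into a plain sequence here:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  sync
  modprobe -v -r nvme-tcp        # -v echoes the rmmod calls it performs: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 89398 && wait 89398       # stop the nvmf target app started for this test
  ip -4 addr flush nvmf_init_if  # drop the 10.0.0.1 initiator address; the next test rebuilds the veth setup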
00:19:30.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.073 Cannot find device "nvmf_tgt_br" 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.073 Cannot find device "nvmf_tgt_br2" 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.073 Cannot find device "nvmf_tgt_br" 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:30.073 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.074 Cannot find device "nvmf_tgt_br2" 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.074 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.333 07:34:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:30.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:30.333 00:19:30.333 --- 10.0.0.2 ping statistics --- 00:19:30.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.333 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:30.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:30.333 00:19:30.333 --- 10.0.0.3 ping statistics --- 00:19:30.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.333 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:30.333 00:19:30.333 --- 10.0.0.1 ping statistics --- 00:19:30.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.333 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.333 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90750 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90750 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90750 ']' 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.334 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.334 [2024-07-25 07:34:03.002432] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
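The block above is nvmf_veth_init building the test network and then nvmfappstart launching the target inside it. A condensed sketch of the topology, reconstructed from the trace (stale-device teardown, the individual link-up commands, and the iptables FORWARD rule are omitted here):

  # Target lives in its own namespace; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # A bridge over the *_br peers gives both namespaces one L2 segment
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # connectivity check before starting the target
  # Target app pinned to core 1 (-m 0x2), run inside the namespace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &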
00:19:30.334 [2024-07-25 07:34:03.002498] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.592 [2024-07-25 07:34:03.128300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.592 [2024-07-25 07:34:03.229855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.592 [2024-07-25 07:34:03.229894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.592 [2024-07-25 07:34:03.229901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.592 [2024-07-25 07:34:03.229906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.592 [2024-07-25 07:34:03.229910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.592 [2024-07-25 07:34:03.229931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.158 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.158 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:31.158 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.158 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.158 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.416 [2024-07-25 07:34:03.918568] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.416 [2024-07-25 07:34:03.926668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:31.416 null0 00:19:31.416 [2024-07-25 07:34:03.958528] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90800 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90800 /tmp/host.sock 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90800 ']' 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:31.416 07:34:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.416 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:31.416 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:31.417 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.417 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.417 [2024-07-25 07:34:04.033653] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:19:31.417 [2024-07-25 07:34:04.033715] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90800 ] 00:19:31.675 [2024-07-25 07:34:04.169472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.675 [2024-07-25 07:34:04.308712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.243 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.501 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.501 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:32.501 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.501 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.437 [2024-07-25 07:34:06.017239] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:33.437 [2024-07-25 07:34:06.017274] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
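At this point two SPDK apps are running: the target inside the namespace (RPC socket /var/tmp/spdk.sock) and a host-side bdev_nvme app on /tmp/host.sock, started with --wait-for-rpc so options can be set before initialization. The discovery attach above corresponds to RPC calls like the following, shown with scripts/rpc.py standing in for the test's rpc_cmd wrapper (arguments copied from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
  $rpc -s /tmp/host.sock framework_start_init
  # Attach to the discovery service on 10.0.0.2:8009 and auto-attach whatever it reports;
  # the short loss/reconnect timeouts matter for the interface-removal step later
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach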
00:19:33.437 [2024-07-25 07:34:06.017291] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:33.437 [2024-07-25 07:34:06.104176] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:33.437 [2024-07-25 07:34:06.160875] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:33.437 [2024-07-25 07:34:06.160938] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:33.437 [2024-07-25 07:34:06.160962] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:33.437 [2024-07-25 07:34:06.160978] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:33.437 [2024-07-25 07:34:06.161002] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:33.437 [2024-07-25 07:34:06.165532] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf49650 was disconnected and freed. delete nvme_qpair. 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.437 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:33.695 07:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:34.635 07:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.014 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.951 07:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:37.889 07:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:38.833 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.093 [2024-07-25 07:34:11.578251] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:39.093 [2024-07-25 07:34:11.578325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.093 [2024-07-25 07:34:11.578336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.093 [2024-07-25 07:34:11.578347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.093 [2024-07-25 07:34:11.578353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.093 [2024-07-25 07:34:11.578360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.093 [2024-07-25 07:34:11.578366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.093 [2024-07-25 07:34:11.578372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.093 [2024-07-25 07:34:11.578377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.093 [2024-07-25 07:34:11.578383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.093 [2024-07-25 07:34:11.578405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.093 [2024-07-25 07:34:11.578411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf12900 is same with the state(5) to be set 00:19:39.093 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:39.093 07:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:39.093 [2024-07-25 07:34:11.588224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf12900 (9): Bad file descriptor 00:19:39.093 [2024-07-25 07:34:11.598231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:40.032 [2024-07-25 07:34:12.642211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:40.032 [2024-07-25 07:34:12.642359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf12900 with addr=10.0.0.2, port=4420 00:19:40.032 [2024-07-25 07:34:12.642403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf12900 is same with the state(5) to be set 00:19:40.032 [2024-07-25 07:34:12.642516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf12900 (9): Bad file descriptor 00:19:40.032 [2024-07-25 07:34:12.643793] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform 
failover, already in progress. 00:19:40.032 [2024-07-25 07:34:12.643886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:40.032 [2024-07-25 07:34:12.643910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:40.032 [2024-07-25 07:34:12.643935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:40.032 [2024-07-25 07:34:12.644009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.032 [2024-07-25 07:34:12.644036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:40.032 07:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:40.971 [2024-07-25 07:34:13.642181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:40.971 [2024-07-25 07:34:13.642234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:40.971 [2024-07-25 07:34:13.642242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:40.971 [2024-07-25 07:34:13.642249] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:40.971 [2024-07-25 07:34:13.642281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
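The bdev_get_bdevs | jq | sort | xargs blocks repeating once per second above and below are the test's get_bdev_list/wait_for_bdev helpers polling the host app while the target address is gone. A minimal sketch of that pattern, with helper names taken from the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py; the real helpers in discovery_remove_ifc.sh may bound their retries differently):

  get_bdev_list() {
      # Flatten all bdev names into one sorted, space-separated line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # Poll until the list matches the expected value exactly
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev ''         # after the address is removed: the stale bdev must disappear
  wait_for_bdev nvme1n1    # after it is restored: the rediscovered namespace must appear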
00:19:40.971 [2024-07-25 07:34:13.642311] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:40.971 [2024-07-25 07:34:13.642360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.971 [2024-07-25 07:34:13.642371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.971 [2024-07-25 07:34:13.642384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.971 [2024-07-25 07:34:13.642389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.971 [2024-07-25 07:34:13.642396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.971 [2024-07-25 07:34:13.642401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.971 [2024-07-25 07:34:13.642407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.971 [2024-07-25 07:34:13.642412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.971 [2024-07-25 07:34:13.642417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.971 [2024-07-25 07:34:13.642421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.971 [2024-07-25 07:34:13.642426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:40.971 [2024-07-25 07:34:13.642845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb53e0 (9): Bad file descriptor 00:19:40.971 [2024-07-25 07:34:13.643854] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:40.971 [2024-07-25 07:34:13.643872] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:40.971 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:41.231 07:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.170 07:34:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:42.170 07:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:43.107 [2024-07-25 07:34:15.649802] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:43.107 [2024-07-25 07:34:15.649845] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:43.107 [2024-07-25 07:34:15.649859] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:43.107 [2024-07-25 07:34:15.735730] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:43.107 [2024-07-25 07:34:15.791777] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:43.107 [2024-07-25 07:34:15.791827] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:43.107 [2024-07-25 07:34:15.791846] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:43.107 [2024-07-25 07:34:15.791862] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:43.107 [2024-07-25 07:34:15.791869] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:43.107 [2024-07-25 07:34:15.798178] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf2e390 was disconnected and freed. delete nvme_qpair. 
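That sequence is the heart of the test: the listener address is deleted out from under an attached discovery controller, the host gives up reconnecting once the 2-second ctrlr-loss timeout expires and drops the stale nvme0n1 bdev, and restoring the address lets the still-running discovery service re-attach the subsystem as nvme1, so nvme1n1 shows up. The interface flap itself is just two pairs of ip commands (copied from the trace):

  # Take the listener address away from the target
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # ... wait_for_bdev '' confirms the old bdev is gone ...
  # Put the address back; discovery re-attaches and nvme1n1 appears
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up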
00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90800 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90800 ']' 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90800 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90800 00:19:43.367 killing process with pid 90800 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90800' 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90800 00:19:43.367 07:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90800 00:19:43.626 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:43.627 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.627 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:43.627 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.627 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:43.627 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.627 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.627 rmmod nvme_tcp 00:19:43.627 rmmod nvme_fabrics 00:19:43.627 rmmod nvme_keyring 00:19:43.886 07:34:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90750 ']' 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90750 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90750 ']' 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90750 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90750 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:43.886 killing process with pid 90750 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90750' 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90750 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90750 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.886 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:44.146 00:19:44.146 real 0m14.215s 00:19:44.146 user 0m25.426s 00:19:44.146 sys 0m1.580s 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.146 ************************************ 00:19:44.146 END TEST nvmf_discovery_remove_ifc 00:19:44.146 ************************************ 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.146 ************************************ 00:19:44.146 START TEST nvmf_identify_kernel_target 00:19:44.146 ************************************ 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:44.146 * Looking for test storage... 00:19:44.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.146 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.147 
07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.147 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:44.407 Cannot find device "nvmf_tgt_br" 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.407 Cannot find device "nvmf_tgt_br2" 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:44.407 Cannot find device "nvmf_tgt_br" 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:44.407 Cannot find device "nvmf_tgt_br2" 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:44.407 07:34:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:44.407 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:44.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:44.667 00:19:44.667 --- 10.0.0.2 ping statistics --- 00:19:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.667 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:44.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:19:44.667 00:19:44.667 --- 10.0.0.3 ping statistics --- 00:19:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.667 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
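[editor note] The nvmf_veth_init trace above builds an isolated veth/bridge topology so the target side (10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace) is reachable from the initiator side (10.0.0.1) over NVMe/TCP port 4420. As a condensed, illustrative sketch of that topology only (not the actual nvmf/common.sh helper; all interface, namespace, and address names are taken from the trace above):
    # create the target namespace and the three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring the links up and bridge the host-side ends together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check in both directions, mirroring the pings in this trace
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
Teardown is the mirror image, as seen at the start of this trace: delete the nvmf_br bridge, the host-side veth ends, and the nvmf_tgt_ns_spdk namespace. [end editor note]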
00:19:44.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:19:44.667 00:19:44.667 --- 10.0.0.1 ping statistics --- 00:19:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.667 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:44.667 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:44.668 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:45.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.236 Waiting for block devices as requested 00:19:45.236 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:45.236 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:45.494 07:34:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:45.494 No valid GPT data, bailing 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n2 00:19:45.494 07:34:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:45.494 No valid GPT data, bailing 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n3 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:45.494 No valid GPT data, bailing 00:19:45.494 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:45.495 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:45.756 No valid GPT data, bailing 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -a 10.0.0.1 -t tcp -s 4420 00:19:45.756 00:19:45.756 Discovery Log Number of Records 2, Generation counter 2 00:19:45.756 =====Discovery Log Entry 0====== 00:19:45.756 trtype: tcp 00:19:45.756 adrfam: ipv4 00:19:45.756 subtype: current discovery subsystem 00:19:45.756 treq: not specified, sq flow control disable supported 00:19:45.756 portid: 1 00:19:45.756 trsvcid: 4420 00:19:45.756 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:45.756 traddr: 10.0.0.1 00:19:45.756 eflags: none 00:19:45.756 sectype: none 00:19:45.756 =====Discovery Log Entry 1====== 00:19:45.756 trtype: tcp 00:19:45.756 adrfam: ipv4 00:19:45.756 subtype: nvme subsystem 00:19:45.756 treq: not 
specified, sq flow control disable supported 00:19:45.756 portid: 1 00:19:45.756 trsvcid: 4420 00:19:45.756 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:45.756 traddr: 10.0.0.1 00:19:45.756 eflags: none 00:19:45.756 sectype: none 00:19:45.756 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:45.756 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:46.017 ===================================================== 00:19:46.017 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:46.017 ===================================================== 00:19:46.017 Controller Capabilities/Features 00:19:46.017 ================================ 00:19:46.017 Vendor ID: 0000 00:19:46.017 Subsystem Vendor ID: 0000 00:19:46.017 Serial Number: 1fac187ec6f1bb17ecb5 00:19:46.017 Model Number: Linux 00:19:46.017 Firmware Version: 6.7.0-68 00:19:46.017 Recommended Arb Burst: 0 00:19:46.017 IEEE OUI Identifier: 00 00 00 00:19:46.017 Multi-path I/O 00:19:46.017 May have multiple subsystem ports: No 00:19:46.017 May have multiple controllers: No 00:19:46.017 Associated with SR-IOV VF: No 00:19:46.017 Max Data Transfer Size: Unlimited 00:19:46.017 Max Number of Namespaces: 0 00:19:46.017 Max Number of I/O Queues: 1024 00:19:46.017 NVMe Specification Version (VS): 1.3 00:19:46.017 NVMe Specification Version (Identify): 1.3 00:19:46.017 Maximum Queue Entries: 1024 00:19:46.017 Contiguous Queues Required: No 00:19:46.017 Arbitration Mechanisms Supported 00:19:46.017 Weighted Round Robin: Not Supported 00:19:46.017 Vendor Specific: Not Supported 00:19:46.017 Reset Timeout: 7500 ms 00:19:46.017 Doorbell Stride: 4 bytes 00:19:46.017 NVM Subsystem Reset: Not Supported 00:19:46.017 Command Sets Supported 00:19:46.017 NVM Command Set: Supported 00:19:46.017 Boot Partition: Not Supported 00:19:46.017 Memory Page Size Minimum: 4096 bytes 00:19:46.017 Memory Page Size Maximum: 4096 bytes 00:19:46.017 Persistent Memory Region: Not Supported 00:19:46.017 Optional Asynchronous Events Supported 00:19:46.017 Namespace Attribute Notices: Not Supported 00:19:46.017 Firmware Activation Notices: Not Supported 00:19:46.017 ANA Change Notices: Not Supported 00:19:46.017 PLE Aggregate Log Change Notices: Not Supported 00:19:46.017 LBA Status Info Alert Notices: Not Supported 00:19:46.017 EGE Aggregate Log Change Notices: Not Supported 00:19:46.017 Normal NVM Subsystem Shutdown event: Not Supported 00:19:46.017 Zone Descriptor Change Notices: Not Supported 00:19:46.017 Discovery Log Change Notices: Supported 00:19:46.017 Controller Attributes 00:19:46.017 128-bit Host Identifier: Not Supported 00:19:46.017 Non-Operational Permissive Mode: Not Supported 00:19:46.017 NVM Sets: Not Supported 00:19:46.017 Read Recovery Levels: Not Supported 00:19:46.017 Endurance Groups: Not Supported 00:19:46.017 Predictable Latency Mode: Not Supported 00:19:46.017 Traffic Based Keep ALive: Not Supported 00:19:46.017 Namespace Granularity: Not Supported 00:19:46.017 SQ Associations: Not Supported 00:19:46.017 UUID List: Not Supported 00:19:46.017 Multi-Domain Subsystem: Not Supported 00:19:46.017 Fixed Capacity Management: Not Supported 00:19:46.017 Variable Capacity Management: Not Supported 00:19:46.017 Delete Endurance Group: Not Supported 00:19:46.017 Delete NVM Set: Not Supported 00:19:46.017 Extended LBA Formats Supported: Not Supported 00:19:46.017 Flexible Data 
Placement Supported: Not Supported 00:19:46.017 00:19:46.017 Controller Memory Buffer Support 00:19:46.017 ================================ 00:19:46.017 Supported: No 00:19:46.017 00:19:46.017 Persistent Memory Region Support 00:19:46.017 ================================ 00:19:46.017 Supported: No 00:19:46.017 00:19:46.017 Admin Command Set Attributes 00:19:46.017 ============================ 00:19:46.017 Security Send/Receive: Not Supported 00:19:46.017 Format NVM: Not Supported 00:19:46.017 Firmware Activate/Download: Not Supported 00:19:46.017 Namespace Management: Not Supported 00:19:46.017 Device Self-Test: Not Supported 00:19:46.017 Directives: Not Supported 00:19:46.017 NVMe-MI: Not Supported 00:19:46.017 Virtualization Management: Not Supported 00:19:46.017 Doorbell Buffer Config: Not Supported 00:19:46.017 Get LBA Status Capability: Not Supported 00:19:46.017 Command & Feature Lockdown Capability: Not Supported 00:19:46.017 Abort Command Limit: 1 00:19:46.017 Async Event Request Limit: 1 00:19:46.017 Number of Firmware Slots: N/A 00:19:46.017 Firmware Slot 1 Read-Only: N/A 00:19:46.017 Firmware Activation Without Reset: N/A 00:19:46.017 Multiple Update Detection Support: N/A 00:19:46.017 Firmware Update Granularity: No Information Provided 00:19:46.017 Per-Namespace SMART Log: No 00:19:46.017 Asymmetric Namespace Access Log Page: Not Supported 00:19:46.017 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:46.018 Command Effects Log Page: Not Supported 00:19:46.018 Get Log Page Extended Data: Supported 00:19:46.018 Telemetry Log Pages: Not Supported 00:19:46.018 Persistent Event Log Pages: Not Supported 00:19:46.018 Supported Log Pages Log Page: May Support 00:19:46.018 Commands Supported & Effects Log Page: Not Supported 00:19:46.018 Feature Identifiers & Effects Log Page:May Support 00:19:46.018 NVMe-MI Commands & Effects Log Page: May Support 00:19:46.018 Data Area 4 for Telemetry Log: Not Supported 00:19:46.018 Error Log Page Entries Supported: 1 00:19:46.018 Keep Alive: Not Supported 00:19:46.018 00:19:46.018 NVM Command Set Attributes 00:19:46.018 ========================== 00:19:46.018 Submission Queue Entry Size 00:19:46.018 Max: 1 00:19:46.018 Min: 1 00:19:46.018 Completion Queue Entry Size 00:19:46.018 Max: 1 00:19:46.018 Min: 1 00:19:46.018 Number of Namespaces: 0 00:19:46.018 Compare Command: Not Supported 00:19:46.018 Write Uncorrectable Command: Not Supported 00:19:46.018 Dataset Management Command: Not Supported 00:19:46.018 Write Zeroes Command: Not Supported 00:19:46.018 Set Features Save Field: Not Supported 00:19:46.018 Reservations: Not Supported 00:19:46.018 Timestamp: Not Supported 00:19:46.018 Copy: Not Supported 00:19:46.018 Volatile Write Cache: Not Present 00:19:46.018 Atomic Write Unit (Normal): 1 00:19:46.018 Atomic Write Unit (PFail): 1 00:19:46.018 Atomic Compare & Write Unit: 1 00:19:46.018 Fused Compare & Write: Not Supported 00:19:46.018 Scatter-Gather List 00:19:46.018 SGL Command Set: Supported 00:19:46.018 SGL Keyed: Not Supported 00:19:46.018 SGL Bit Bucket Descriptor: Not Supported 00:19:46.018 SGL Metadata Pointer: Not Supported 00:19:46.018 Oversized SGL: Not Supported 00:19:46.018 SGL Metadata Address: Not Supported 00:19:46.018 SGL Offset: Supported 00:19:46.018 Transport SGL Data Block: Not Supported 00:19:46.018 Replay Protected Memory Block: Not Supported 00:19:46.018 00:19:46.018 Firmware Slot Information 00:19:46.018 ========================= 00:19:46.018 Active slot: 0 00:19:46.018 00:19:46.018 00:19:46.018 Error Log 
00:19:46.018 ========= 00:19:46.018 00:19:46.018 Active Namespaces 00:19:46.018 ================= 00:19:46.018 Discovery Log Page 00:19:46.018 ================== 00:19:46.018 Generation Counter: 2 00:19:46.018 Number of Records: 2 00:19:46.018 Record Format: 0 00:19:46.018 00:19:46.018 Discovery Log Entry 0 00:19:46.018 ---------------------- 00:19:46.018 Transport Type: 3 (TCP) 00:19:46.018 Address Family: 1 (IPv4) 00:19:46.018 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:46.018 Entry Flags: 00:19:46.018 Duplicate Returned Information: 0 00:19:46.018 Explicit Persistent Connection Support for Discovery: 0 00:19:46.018 Transport Requirements: 00:19:46.018 Secure Channel: Not Specified 00:19:46.018 Port ID: 1 (0x0001) 00:19:46.018 Controller ID: 65535 (0xffff) 00:19:46.018 Admin Max SQ Size: 32 00:19:46.018 Transport Service Identifier: 4420 00:19:46.018 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:46.018 Transport Address: 10.0.0.1 00:19:46.018 Discovery Log Entry 1 00:19:46.018 ---------------------- 00:19:46.018 Transport Type: 3 (TCP) 00:19:46.018 Address Family: 1 (IPv4) 00:19:46.018 Subsystem Type: 2 (NVM Subsystem) 00:19:46.018 Entry Flags: 00:19:46.018 Duplicate Returned Information: 0 00:19:46.018 Explicit Persistent Connection Support for Discovery: 0 00:19:46.018 Transport Requirements: 00:19:46.018 Secure Channel: Not Specified 00:19:46.018 Port ID: 1 (0x0001) 00:19:46.018 Controller ID: 65535 (0xffff) 00:19:46.018 Admin Max SQ Size: 32 00:19:46.018 Transport Service Identifier: 4420 00:19:46.018 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:46.018 Transport Address: 10.0.0.1 00:19:46.018 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:46.018 get_feature(0x01) failed 00:19:46.018 get_feature(0x02) failed 00:19:46.018 get_feature(0x04) failed 00:19:46.018 ===================================================== 00:19:46.018 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:46.018 ===================================================== 00:19:46.018 Controller Capabilities/Features 00:19:46.018 ================================ 00:19:46.018 Vendor ID: 0000 00:19:46.018 Subsystem Vendor ID: 0000 00:19:46.018 Serial Number: 727db24789017e3b1775 00:19:46.018 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:46.018 Firmware Version: 6.7.0-68 00:19:46.018 Recommended Arb Burst: 6 00:19:46.018 IEEE OUI Identifier: 00 00 00 00:19:46.018 Multi-path I/O 00:19:46.018 May have multiple subsystem ports: Yes 00:19:46.018 May have multiple controllers: Yes 00:19:46.018 Associated with SR-IOV VF: No 00:19:46.018 Max Data Transfer Size: Unlimited 00:19:46.018 Max Number of Namespaces: 1024 00:19:46.018 Max Number of I/O Queues: 128 00:19:46.018 NVMe Specification Version (VS): 1.3 00:19:46.018 NVMe Specification Version (Identify): 1.3 00:19:46.018 Maximum Queue Entries: 1024 00:19:46.018 Contiguous Queues Required: No 00:19:46.018 Arbitration Mechanisms Supported 00:19:46.018 Weighted Round Robin: Not Supported 00:19:46.018 Vendor Specific: Not Supported 00:19:46.018 Reset Timeout: 7500 ms 00:19:46.018 Doorbell Stride: 4 bytes 00:19:46.018 NVM Subsystem Reset: Not Supported 00:19:46.018 Command Sets Supported 00:19:46.018 NVM Command Set: Supported 00:19:46.018 Boot Partition: Not Supported 00:19:46.018 Memory 
Page Size Minimum: 4096 bytes 00:19:46.018 Memory Page Size Maximum: 4096 bytes 00:19:46.018 Persistent Memory Region: Not Supported 00:19:46.018 Optional Asynchronous Events Supported 00:19:46.018 Namespace Attribute Notices: Supported 00:19:46.018 Firmware Activation Notices: Not Supported 00:19:46.018 ANA Change Notices: Supported 00:19:46.018 PLE Aggregate Log Change Notices: Not Supported 00:19:46.018 LBA Status Info Alert Notices: Not Supported 00:19:46.018 EGE Aggregate Log Change Notices: Not Supported 00:19:46.018 Normal NVM Subsystem Shutdown event: Not Supported 00:19:46.018 Zone Descriptor Change Notices: Not Supported 00:19:46.018 Discovery Log Change Notices: Not Supported 00:19:46.018 Controller Attributes 00:19:46.018 128-bit Host Identifier: Supported 00:19:46.018 Non-Operational Permissive Mode: Not Supported 00:19:46.018 NVM Sets: Not Supported 00:19:46.018 Read Recovery Levels: Not Supported 00:19:46.018 Endurance Groups: Not Supported 00:19:46.018 Predictable Latency Mode: Not Supported 00:19:46.018 Traffic Based Keep ALive: Supported 00:19:46.018 Namespace Granularity: Not Supported 00:19:46.018 SQ Associations: Not Supported 00:19:46.018 UUID List: Not Supported 00:19:46.018 Multi-Domain Subsystem: Not Supported 00:19:46.018 Fixed Capacity Management: Not Supported 00:19:46.018 Variable Capacity Management: Not Supported 00:19:46.018 Delete Endurance Group: Not Supported 00:19:46.018 Delete NVM Set: Not Supported 00:19:46.018 Extended LBA Formats Supported: Not Supported 00:19:46.018 Flexible Data Placement Supported: Not Supported 00:19:46.018 00:19:46.018 Controller Memory Buffer Support 00:19:46.018 ================================ 00:19:46.018 Supported: No 00:19:46.018 00:19:46.018 Persistent Memory Region Support 00:19:46.018 ================================ 00:19:46.018 Supported: No 00:19:46.018 00:19:46.018 Admin Command Set Attributes 00:19:46.018 ============================ 00:19:46.018 Security Send/Receive: Not Supported 00:19:46.018 Format NVM: Not Supported 00:19:46.018 Firmware Activate/Download: Not Supported 00:19:46.018 Namespace Management: Not Supported 00:19:46.018 Device Self-Test: Not Supported 00:19:46.018 Directives: Not Supported 00:19:46.018 NVMe-MI: Not Supported 00:19:46.018 Virtualization Management: Not Supported 00:19:46.018 Doorbell Buffer Config: Not Supported 00:19:46.018 Get LBA Status Capability: Not Supported 00:19:46.018 Command & Feature Lockdown Capability: Not Supported 00:19:46.018 Abort Command Limit: 4 00:19:46.019 Async Event Request Limit: 4 00:19:46.019 Number of Firmware Slots: N/A 00:19:46.019 Firmware Slot 1 Read-Only: N/A 00:19:46.019 Firmware Activation Without Reset: N/A 00:19:46.019 Multiple Update Detection Support: N/A 00:19:46.019 Firmware Update Granularity: No Information Provided 00:19:46.019 Per-Namespace SMART Log: Yes 00:19:46.019 Asymmetric Namespace Access Log Page: Supported 00:19:46.019 ANA Transition Time : 10 sec 00:19:46.019 00:19:46.019 Asymmetric Namespace Access Capabilities 00:19:46.019 ANA Optimized State : Supported 00:19:46.019 ANA Non-Optimized State : Supported 00:19:46.019 ANA Inaccessible State : Supported 00:19:46.019 ANA Persistent Loss State : Supported 00:19:46.019 ANA Change State : Supported 00:19:46.019 ANAGRPID is not changed : No 00:19:46.019 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:46.019 00:19:46.019 ANA Group Identifier Maximum : 128 00:19:46.019 Number of ANA Group Identifiers : 128 00:19:46.019 Max Number of Allowed Namespaces : 1024 00:19:46.019 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:46.019 Command Effects Log Page: Supported 00:19:46.019 Get Log Page Extended Data: Supported 00:19:46.019 Telemetry Log Pages: Not Supported 00:19:46.019 Persistent Event Log Pages: Not Supported 00:19:46.019 Supported Log Pages Log Page: May Support 00:19:46.019 Commands Supported & Effects Log Page: Not Supported 00:19:46.019 Feature Identifiers & Effects Log Page:May Support 00:19:46.019 NVMe-MI Commands & Effects Log Page: May Support 00:19:46.019 Data Area 4 for Telemetry Log: Not Supported 00:19:46.019 Error Log Page Entries Supported: 128 00:19:46.019 Keep Alive: Supported 00:19:46.019 Keep Alive Granularity: 1000 ms 00:19:46.019 00:19:46.019 NVM Command Set Attributes 00:19:46.019 ========================== 00:19:46.019 Submission Queue Entry Size 00:19:46.019 Max: 64 00:19:46.019 Min: 64 00:19:46.019 Completion Queue Entry Size 00:19:46.019 Max: 16 00:19:46.019 Min: 16 00:19:46.019 Number of Namespaces: 1024 00:19:46.019 Compare Command: Not Supported 00:19:46.019 Write Uncorrectable Command: Not Supported 00:19:46.019 Dataset Management Command: Supported 00:19:46.019 Write Zeroes Command: Supported 00:19:46.019 Set Features Save Field: Not Supported 00:19:46.019 Reservations: Not Supported 00:19:46.019 Timestamp: Not Supported 00:19:46.019 Copy: Not Supported 00:19:46.019 Volatile Write Cache: Present 00:19:46.019 Atomic Write Unit (Normal): 1 00:19:46.019 Atomic Write Unit (PFail): 1 00:19:46.019 Atomic Compare & Write Unit: 1 00:19:46.019 Fused Compare & Write: Not Supported 00:19:46.019 Scatter-Gather List 00:19:46.019 SGL Command Set: Supported 00:19:46.019 SGL Keyed: Not Supported 00:19:46.019 SGL Bit Bucket Descriptor: Not Supported 00:19:46.019 SGL Metadata Pointer: Not Supported 00:19:46.019 Oversized SGL: Not Supported 00:19:46.019 SGL Metadata Address: Not Supported 00:19:46.019 SGL Offset: Supported 00:19:46.019 Transport SGL Data Block: Not Supported 00:19:46.019 Replay Protected Memory Block: Not Supported 00:19:46.019 00:19:46.019 Firmware Slot Information 00:19:46.019 ========================= 00:19:46.019 Active slot: 0 00:19:46.019 00:19:46.019 Asymmetric Namespace Access 00:19:46.019 =========================== 00:19:46.019 Change Count : 0 00:19:46.019 Number of ANA Group Descriptors : 1 00:19:46.019 ANA Group Descriptor : 0 00:19:46.019 ANA Group ID : 1 00:19:46.019 Number of NSID Values : 1 00:19:46.019 Change Count : 0 00:19:46.019 ANA State : 1 00:19:46.019 Namespace Identifier : 1 00:19:46.019 00:19:46.019 Commands Supported and Effects 00:19:46.019 ============================== 00:19:46.019 Admin Commands 00:19:46.019 -------------- 00:19:46.019 Get Log Page (02h): Supported 00:19:46.019 Identify (06h): Supported 00:19:46.019 Abort (08h): Supported 00:19:46.019 Set Features (09h): Supported 00:19:46.019 Get Features (0Ah): Supported 00:19:46.019 Asynchronous Event Request (0Ch): Supported 00:19:46.019 Keep Alive (18h): Supported 00:19:46.019 I/O Commands 00:19:46.019 ------------ 00:19:46.019 Flush (00h): Supported 00:19:46.019 Write (01h): Supported LBA-Change 00:19:46.019 Read (02h): Supported 00:19:46.019 Write Zeroes (08h): Supported LBA-Change 00:19:46.019 Dataset Management (09h): Supported 00:19:46.019 00:19:46.019 Error Log 00:19:46.019 ========= 00:19:46.019 Entry: 0 00:19:46.019 Error Count: 0x3 00:19:46.019 Submission Queue Id: 0x0 00:19:46.019 Command Id: 0x5 00:19:46.019 Phase Bit: 0 00:19:46.019 Status Code: 0x2 00:19:46.019 Status Code Type: 0x0 00:19:46.019 Do Not Retry: 1 00:19:46.019 Error 
Location: 0x28 00:19:46.019 LBA: 0x0 00:19:46.019 Namespace: 0x0 00:19:46.019 Vendor Log Page: 0x0 00:19:46.019 ----------- 00:19:46.019 Entry: 1 00:19:46.019 Error Count: 0x2 00:19:46.019 Submission Queue Id: 0x0 00:19:46.019 Command Id: 0x5 00:19:46.019 Phase Bit: 0 00:19:46.019 Status Code: 0x2 00:19:46.019 Status Code Type: 0x0 00:19:46.019 Do Not Retry: 1 00:19:46.019 Error Location: 0x28 00:19:46.019 LBA: 0x0 00:19:46.019 Namespace: 0x0 00:19:46.019 Vendor Log Page: 0x0 00:19:46.019 ----------- 00:19:46.019 Entry: 2 00:19:46.019 Error Count: 0x1 00:19:46.019 Submission Queue Id: 0x0 00:19:46.019 Command Id: 0x4 00:19:46.019 Phase Bit: 0 00:19:46.019 Status Code: 0x2 00:19:46.019 Status Code Type: 0x0 00:19:46.019 Do Not Retry: 1 00:19:46.019 Error Location: 0x28 00:19:46.019 LBA: 0x0 00:19:46.019 Namespace: 0x0 00:19:46.019 Vendor Log Page: 0x0 00:19:46.019 00:19:46.019 Number of Queues 00:19:46.019 ================ 00:19:46.019 Number of I/O Submission Queues: 128 00:19:46.019 Number of I/O Completion Queues: 128 00:19:46.019 00:19:46.019 ZNS Specific Controller Data 00:19:46.019 ============================ 00:19:46.019 Zone Append Size Limit: 0 00:19:46.019 00:19:46.019 00:19:46.019 Active Namespaces 00:19:46.019 ================= 00:19:46.019 get_feature(0x05) failed 00:19:46.019 Namespace ID:1 00:19:46.019 Command Set Identifier: NVM (00h) 00:19:46.019 Deallocate: Supported 00:19:46.019 Deallocated/Unwritten Error: Not Supported 00:19:46.019 Deallocated Read Value: Unknown 00:19:46.019 Deallocate in Write Zeroes: Not Supported 00:19:46.019 Deallocated Guard Field: 0xFFFF 00:19:46.019 Flush: Supported 00:19:46.019 Reservation: Not Supported 00:19:46.019 Namespace Sharing Capabilities: Multiple Controllers 00:19:46.019 Size (in LBAs): 1310720 (5GiB) 00:19:46.019 Capacity (in LBAs): 1310720 (5GiB) 00:19:46.019 Utilization (in LBAs): 1310720 (5GiB) 00:19:46.019 UUID: 9b5a1e39-8341-49dc-9706-7ffa387247da 00:19:46.019 Thin Provisioning: Not Supported 00:19:46.019 Per-NS Atomic Units: Yes 00:19:46.019 Atomic Boundary Size (Normal): 0 00:19:46.019 Atomic Boundary Size (PFail): 0 00:19:46.019 Atomic Boundary Offset: 0 00:19:46.019 NGUID/EUI64 Never Reused: No 00:19:46.019 ANA group ID: 1 00:19:46.019 Namespace Write Protected: No 00:19:46.019 Number of LBA Formats: 1 00:19:46.019 Current LBA Format: LBA Format #00 00:19:46.019 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:46.019 00:19:46.019 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:46.019 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.019 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.279 rmmod nvme_tcp 00:19:46.279 rmmod nvme_fabrics 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:46.279 07:34:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.279 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:46.280 07:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:47.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:47.216 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:47.216 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:47.216 00:19:47.216 real 0m3.219s 00:19:47.216 user 0m1.083s 00:19:47.216 sys 0m1.700s 00:19:47.216 07:34:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.216 07:34:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.216 ************************************ 00:19:47.216 END TEST nvmf_identify_kernel_target 00:19:47.216 ************************************ 00:19:47.476 07:34:19 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:47.476 07:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:47.476 07:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.476 07:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.476 ************************************ 00:19:47.476 START TEST nvmf_auth_host 00:19:47.476 ************************************ 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:47.476 * Looking for test storage... 00:19:47.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.476 07:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.476 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:47.477 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:47.736 Cannot find device "nvmf_tgt_br" 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.736 Cannot find device "nvmf_tgt_br2" 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:47.736 Cannot find device "nvmf_tgt_br" 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:47.736 Cannot find device "nvmf_tgt_br2" 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.736 07:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:47.736 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:47.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:19:47.996 00:19:47.996 --- 10.0.0.2 ping statistics --- 00:19:47.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.996 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:47.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:19:47.996 00:19:47.996 --- 10.0.0.3 ping statistics --- 00:19:47.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.996 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:47.996 00:19:47.996 --- 10.0.0.1 ping statistics --- 00:19:47.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.996 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91716 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91716 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91716 ']' 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
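Condensed from the nvmf_veth_init sequence traced above, the virtual test topology reduces to the commands below (all of them appear in the trace; the cleanup of leftover interfaces from a previous run is omitted). The initiator lives in the root namespace on 10.0.0.1, the two target interfaces sit inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and a bridge plus two iptables rules connect them:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target ns -> root ns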
00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.996 07:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2fcf37568525af4238fff86dcd1a3f35 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zDT 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2fcf37568525af4238fff86dcd1a3f35 0 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2fcf37568525af4238fff86dcd1a3f35 0 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2fcf37568525af4238fff86dcd1a3f35 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zDT 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zDT 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zDT 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:48.936 07:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=296df9bc6903ac7bbd7a9d685cdfddc217fbb981e7d7e596dc705f6212ea3884 00:19:48.936 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zDG 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 296df9bc6903ac7bbd7a9d685cdfddc217fbb981e7d7e596dc705f6212ea3884 3 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 296df9bc6903ac7bbd7a9d685cdfddc217fbb981e7d7e596dc705f6212ea3884 3 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=296df9bc6903ac7bbd7a9d685cdfddc217fbb981e7d7e596dc705f6212ea3884 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zDG 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zDG 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.zDG 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ac7ad8ec0def407ff9d3d82b78ddfdbae393399c99a8ac8 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V86 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ac7ad8ec0def407ff9d3d82b78ddfdbae393399c99a8ac8 0 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ac7ad8ec0def407ff9d3d82b78ddfdbae393399c99a8ac8 0 
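Each of the gen_dhchap_key calls traced above draws random bytes with xxd and wraps them into a DHHC-1 secret through the small inline python step. The sketch below is a hedged reconstruction of one such call ("gen_dhchap_key null 48"); the encoding detail, base64 of the key characters with a little-endian CRC32 appended, is my reading of the DH-HMAC-CHAP secret format and not something the trace itself states:

key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex characters
digest=0                                   # 0=null, 1=sha256, 2=sha384, 3=sha512 (digests map above)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# Assumption: secret = "DHHC-1:<digest>:" + base64(key bytes + CRC32(key) little-endian) + ":"
blob = key + struct.pack('<I', zlib.crc32(key))
print(f"DHHC-1:{digest:02d}:{base64.b64encode(blob).decode()}:")
EOF
chmod 0600 "$file"
echo "$file"                               # e.g. /tmp/spdk.key-null.V86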
00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ac7ad8ec0def407ff9d3d82b78ddfdbae393399c99a8ac8 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V86 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V86 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.V86 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.196 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a6ef0fce63acd23a2d04122380258d19003dc5fea7e26f50 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aoM 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a6ef0fce63acd23a2d04122380258d19003dc5fea7e26f50 2 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a6ef0fce63acd23a2d04122380258d19003dc5fea7e26f50 2 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a6ef0fce63acd23a2d04122380258d19003dc5fea7e26f50 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aoM 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aoM 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aoM 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.197 07:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df6431fa1a85010cbe374acbf56519ca 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NR0 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df6431fa1a85010cbe374acbf56519ca 1 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df6431fa1a85010cbe374acbf56519ca 1 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df6431fa1a85010cbe374acbf56519ca 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:49.197 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NR0 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NR0 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.NR0 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93d4f8a7172aaaf08cac52acd6e59f37 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fAh 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93d4f8a7172aaaf08cac52acd6e59f37 1 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93d4f8a7172aaaf08cac52acd6e59f37 1 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=93d4f8a7172aaaf08cac52acd6e59f37 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:49.457 07:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fAh 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fAh 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fAh 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4bda25a8e7f9887fb555e6e8509ca6f1de2ea245114f4e7e 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CSI 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4bda25a8e7f9887fb555e6e8509ca6f1de2ea245114f4e7e 2 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4bda25a8e7f9887fb555e6e8509ca6f1de2ea245114f4e7e 2 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4bda25a8e7f9887fb555e6e8509ca6f1de2ea245114f4e7e 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CSI 00:19:49.457 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CSI 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CSI 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.458 07:34:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa7457a68c8b606f56960869969d89c9 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jsK 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa7457a68c8b606f56960869969d89c9 0 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa7457a68c8b606f56960869969d89c9 0 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa7457a68c8b606f56960869969d89c9 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jsK 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jsK 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jsK 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87816078b6eddef9cd5c8940f7e9bf248768b1f8a09be995bbcc48dfdf71ceb6 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.458 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dMb 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87816078b6eddef9cd5c8940f7e9bf248768b1f8a09be995bbcc48dfdf71ceb6 3 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87816078b6eddef9cd5c8940f7e9bf248768b1f8a09be995bbcc48dfdf71ceb6 3 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87816078b6eddef9cd5c8940f7e9bf248768b1f8a09be995bbcc48dfdf71ceb6 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dMb 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dMb 00:19:49.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dMb 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91716 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91716 ']' 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zDT 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.zDG ]] 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zDG 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.V86 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.aoM ]] 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.aoM 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.718 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.NR0 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fAh ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fAh 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CSI 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jsK ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jsK 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dMb 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.978 07:34:22 
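Outside the test harness, the key-registration loop traced above (host/auth.sh@80-82) is roughly equivalent to plain rpc.py calls against the running nvmf_tgt on the default /var/tmp/spdk.sock socket; rpc_cmd in the harness is a thin wrapper around that script, and the key file paths are the ones generated earlier in this log:

spdk=/home/vagrant/spdk_repo/spdk
$spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.zDT
$spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zDG
$spdk/scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.V86
$spdk/scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aoM
$spdk/scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha256.NR0
$spdk/scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fAh
$spdk/scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha384.CSI
$spdk/scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.jsK
$spdk/scripts/rpc.py keyring_file_add_key key4 /tmp/spdk.key-sha512.dMb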
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:49.978 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:49.979 07:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:50.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:50.546 Waiting for block devices as requested 00:19:50.546 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:50.546 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:51.485 07:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:51.485 No valid GPT data, bailing 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n2 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:51.485 No valid GPT data, bailing 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n3 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:51.485 No valid GPT data, bailing 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:51.485 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:51.746 No valid GPT data, bailing 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -a 10.0.0.1 -t tcp -s 4420 00:19:51.746 00:19:51.746 Discovery Log Number of Records 2, Generation counter 2 00:19:51.746 =====Discovery Log Entry 0====== 00:19:51.746 trtype: tcp 00:19:51.746 adrfam: ipv4 00:19:51.746 subtype: current discovery subsystem 00:19:51.746 treq: not specified, sq flow control disable supported 00:19:51.746 portid: 1 00:19:51.746 trsvcid: 4420 00:19:51.746 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:51.746 traddr: 10.0.0.1 00:19:51.746 eflags: none 00:19:51.746 sectype: none 00:19:51.746 =====Discovery Log Entry 1====== 00:19:51.746 trtype: tcp 00:19:51.746 adrfam: ipv4 00:19:51.746 subtype: nvme subsystem 00:19:51.746 treq: not specified, sq flow control disable supported 00:19:51.746 portid: 1 00:19:51.746 trsvcid: 4420 00:19:51.746 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:51.746 traddr: 10.0.0.1 00:19:51.746 eflags: none 00:19:51.746 sectype: none 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:51.746 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
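On the kernel side, the configure_kernel_target and nvmet_auth_set_key steps traced above populate the nvmet configfs tree. The sketch below reconstructs the essential echo and ln -s commands with explicit attribute paths; the attribute file names (device_path, enable, addr_*, attr_allow_any_host, dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are my reading of the kernel nvmet configfs layout and are not spelled out in the trace, so treat them as assumptions:

cfs=/sys/kernel/config/nvmet
subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
host=$cfs/hosts/nqn.2024-02.io.spdk:host0
mkdir "$subsys" "$subsys/namespaces/1" "$cfs/ports/1" "$host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"     # block device selected above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"                 # listen on the initiator-side address
echo tcp > "$cfs/ports/1/addr_trtype"
echo 4420 > "$cfs/ports/1/addr_trsvcid"
echo ipv4 > "$cfs/ports/1/addr_adrfam"
ln -s "$subsys" "$cfs/ports/1/subsystems/"
echo 0 > "$subsys/attr_allow_any_host"                     # only the explicitly allowed host may connect
ln -s "$host" "$subsys/allowed_hosts/"
# nvmet_auth_set_key sha256 ffdhe2048 1: hash, DH group and both directions' secrets for this host
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==:" > "$host/dhchap_key"
echo "DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==:" > "$host/dhchap_ctrl_key"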
10.0.0.1 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.007 nvme0n1 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.007 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.267 nvme0n1 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.267 
07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.267 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.268 07:34:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.268 nvme0n1 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.268 07:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:52.528 07:34:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 nvme0n1 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.528 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.788 nvme0n1 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.788 
07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.788 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.789 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
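[editor note] The repeating block above is one pass of the test's connect_authenticate loop: it limits the host to the digest/dhgroup pair under test with bdev_nvme_set_options, attaches the controller over TCP with the selected --dhchap-key/--dhchap-ctrlr-key, confirms a controller named nvme0 shows up in bdev_nvme_get_controllers, then detaches it. A minimal shell sketch of that single iteration follows; it uses only the RPC names and flags visible in this log, and "./scripts/rpc.py" is an assumed stand-in for the test's rpc_cmd wrapper (target already listening on 10.0.0.1:4420 with the matching keys configured on the nvmet side).

  digest=sha256
  dhgroup=ffdhe2048
  keyid=1

  # Restrict the host to the digest/dhgroup pair under test.
  ./scripts/rpc.py bdev_nvme_set_options \
          --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"

  # Attach over TCP, authenticating with key<keyid> (plus the controller key when one exists).
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Verify the controller came up, then tear it down before the next combination.
  [[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The log entries that follow repeat this sequence for the remaining key IDs and for each dhgroup (ffdhe3072, ffdhe4096, ...) under the same digest.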
00:19:52.789 nvme0n1 00:19:52.789 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.789 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.789 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.789 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.789 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.048 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.048 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.048 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.048 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.048 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.049 07:34:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.049 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 nvme0n1 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.309 07:34:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.309 07:34:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.309 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.310 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.310 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.310 07:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.569 nvme0n1 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:53.569 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.570 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.829 nvme0n1 00:19:53.829 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.829 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.829 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.829 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.829 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.829 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.830 nvme0n1 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.830 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.090 nvme0n1 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:54.090 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:54.091 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.091 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:54.675 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:54.675 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.676 07:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.676 nvme0n1 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.676 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.935 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.936 07:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.936 nvme0n1 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.936 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.195 nvme0n1 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.195 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.455 07:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.455 nvme0n1 00:19:55.455 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.455 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.455 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.455 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.455 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.455 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.715 07:34:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.715 nvme0n1 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.715 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.716 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.716 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.716 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.716 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.716 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.975 07:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.356 07:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.356 nvme0n1 00:19:57.356 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.356 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.356 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.356 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.356 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.356 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.616 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.876 nvme0n1 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.876 07:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.876 07:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.876 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.137 nvme0n1 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.137 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.397 07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.397 
07:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.657 nvme0n1 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.657 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.918 nvme0n1 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.918 07:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.918 07:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.487 nvme0n1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.487 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.056 nvme0n1 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.056 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.057 
07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.057 07:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 nvme0n1 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.627 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.196 nvme0n1 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.196 07:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:01.196 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.197 07:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.197 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.766 nvme0n1 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.766 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.767 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.027 nvme0n1 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.027 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.028 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.288 nvme0n1 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.288 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:02.289 
07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.289 nvme0n1 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.289 
07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.289 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.289 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.550 nvme0n1 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.550 nvme0n1 00:20:02.550 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.810 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.811 nvme0n1 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.811 
07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.811 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.071 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.071 nvme0n1 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.071 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:03.072 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.072 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.332 nvme0n1 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.332 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.332 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 nvme0n1 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.592 
07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:03.592 nvme0n1 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:03.851 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:03.852 07:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.852 nvme0n1 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.852 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.110 07:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:04.110 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.111 07:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.111 nvme0n1 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.111 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.370 07:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.370 nvme0n1 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.370 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 nvme0n1 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.630 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.889 nvme0n1 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.889 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.148 07:34:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.148 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.408 nvme0n1 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.408 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.409 07:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.409 07:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.409 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.672 nvme0n1 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.672 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.673 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.940 nvme0n1 00:20:05.940 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.205 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.205 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.205 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.205 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.205 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.206 07:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.466 nvme0n1 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.466 07:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.466 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.725 nvme0n1 00:20:06.725 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.725 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.725 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.725 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.725 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.725 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.984 07:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.553 nvme0n1 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.553 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.554 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 nvme0n1 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.140 07:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.140 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.141 07:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.141 07:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.710 nvme0n1 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.710 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.710 
07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.279 nvme0n1 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.279 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.280 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.280 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 nvme0n1 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:09.848 07:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.848 07:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 nvme0n1 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.848 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:09.849 07:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.849 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.109 nvme0n1 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.109 nvme0n1 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.109 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.369 nvme0n1 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.369 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.369 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.370 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.630 nvme0n1 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.630 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.631 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.631 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.631 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.631 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.631 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.631 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.890 nvme0n1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.890 nvme0n1 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.890 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:11.149 
07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 nvme0n1 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.149 
07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.149 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.408 nvme0n1 00:20:11.408 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.408 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.408 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.408 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.408 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.409 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.409 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.409 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.409 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 nvme0n1 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.409 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.668 nvme0n1 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.668 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.928 
07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.928 07:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.928 nvme0n1 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.928 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:12.188 07:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.188 nvme0n1 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.188 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.448 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.449 07:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.449 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.449 nvme0n1 00:20:12.449 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.449 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.449 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.449 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.449 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.449 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.709 
07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:12.709 nvme0n1 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.709 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:12.969 07:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.969 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.229 nvme0n1 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.229 07:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:13.229 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.230 07:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.230 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.490 nvme0n1 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.490 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.491 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.750 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.010 nvme0n1 00:20:14.010 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.010 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.010 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.010 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.010 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.010 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.011 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.271 nvme0n1 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.271 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.272 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.531 nvme0n1 00:20:14.531 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmZjZjM3NTY4NTI1YWY0MjM4ZmZmODZkY2QxYTNmMzXnY9sa: 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjk2ZGY5YmM2OTAzYWM3YmJkN2E5ZDY4NWNkZmRkYzIxN2ZiYjk4MWU3ZDdlNTk2ZGM3MDVmNjIxMmVhMzg4NBfMYN8=: 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.791 07:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.791 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.361 nvme0n1 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.361 07:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.361 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.927 nvme0n1 00:20:15.927 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.927 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.927 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.927 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.927 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.927 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY2NDMxZmExYTg1MDEwY2JlMzc0YWNiZjU2NTE5Y2EBhhGd: 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTNkNGY4YTcxNzJhYWFmMDhjYWM1MmFjZDZlNTlmMzcxsqV6: 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.928 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.497 nvme0n1 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJkYTI1YThlN2Y5ODg3ZmI1NTVlNmU4NTA5Y2E2ZjFkZTJlYTI0NTExNGY0ZTdlCFvM9A==: 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWE3NDU3YTY4YzhiNjA2ZjU2OTYwODY5OTY5ZDg5Yzm3WMzf: 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.497 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.068 nvme0n1 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc4MTYwNzhiNmVkZGVmOWNkNWM4OTQwZjdlOWJmMjQ4NzY4YjFmOGEwOWJlOTk1YmJjYzQ4ZGZkZjcxY2ViNqCxFiU=: 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.069 07:34:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.069 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.328 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.588 nvme0n1 00:20:17.588 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.588 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.588 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.588 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.588 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.588 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFjN2FkOGVjMGRlZjQwN2ZmOWQzZDgyYjc4ZGRmZGJhZTM5MzM5OWM5OWE4YWM4DqjxiQ==: 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZlZjBmY2U2M2FjZDIzYTJkMDQxMjIzODAyNThkMTkwMDNkYzVmZWE3ZTI2ZjUwopSMzw==: 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # 
local es=0 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.848 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.849 2024/07/25 07:34:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:17.849 request: 00:20:17.849 { 00:20:17.849 "method": "bdev_nvme_attach_controller", 00:20:17.849 "params": { 00:20:17.849 "name": "nvme0", 00:20:17.849 "trtype": "tcp", 00:20:17.849 "traddr": "10.0.0.1", 00:20:17.849 "adrfam": "ipv4", 00:20:17.849 "trsvcid": "4420", 00:20:17.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:17.849 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:17.849 "prchk_reftag": false, 00:20:17.849 "prchk_guard": false, 00:20:17.849 "hdgst": false, 00:20:17.849 "ddgst": false 00:20:17.849 } 00:20:17.849 } 00:20:17.849 Got JSON-RPC error response 00:20:17.849 GoRPCClient: error on JSON-RPC call 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- 
# local ip 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.849 2024/07/25 07:34:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:17.849 request: 00:20:17.849 { 00:20:17.849 "method": "bdev_nvme_attach_controller", 00:20:17.849 "params": { 00:20:17.849 "name": "nvme0", 00:20:17.849 "trtype": "tcp", 00:20:17.849 "traddr": "10.0.0.1", 00:20:17.849 "adrfam": "ipv4", 00:20:17.849 "trsvcid": "4420", 00:20:17.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:17.849 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:17.849 "prchk_reftag": false, 00:20:17.849 "prchk_guard": false, 00:20:17.849 "hdgst": false, 00:20:17.849 "ddgst": false, 00:20:17.849 "dhchap_key": "key2" 00:20:17.849 } 00:20:17.849 } 00:20:17.849 Got 
JSON-RPC error response 00:20:17.849 GoRPCClient: error on JSON-RPC call 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.849 2024/07/25 07:34:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:17.849 request: 00:20:17.849 { 00:20:17.849 "method": "bdev_nvme_attach_controller", 00:20:17.849 "params": { 00:20:17.849 "name": "nvme0", 00:20:17.849 "trtype": "tcp", 00:20:17.849 "traddr": "10.0.0.1", 00:20:17.849 "adrfam": "ipv4", 00:20:17.849 "trsvcid": "4420", 00:20:17.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:17.849 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:17.849 "prchk_reftag": false, 00:20:17.849 "prchk_guard": false, 00:20:17.849 "hdgst": false, 00:20:17.849 "ddgst": false, 00:20:17.849 "dhchap_key": "key1", 00:20:17.849 "dhchap_ctrlr_key": "ckey2" 00:20:17.849 } 00:20:17.849 } 00:20:17.849 Got JSON-RPC error response 00:20:17.849 GoRPCClient: error on JSON-RPC call 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.849 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.850 rmmod nvme_tcp 00:20:17.850 rmmod nvme_fabrics 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91716 ']' 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # 
killprocess 91716 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91716 ']' 00:20:17.850 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91716 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91716 00:20:18.109 killing process with pid 91716 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91716' 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91716 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91716 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:18.109 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:18.368 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:18.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:19.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:19.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:19.196 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zDT /tmp/spdk.key-null.V86 /tmp/spdk.key-sha256.NR0 /tmp/spdk.key-sha384.CSI /tmp/spdk.key-sha512.dMb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:19.196 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:19.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:19.765 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:19.765 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:19.765 00:20:19.765 real 0m32.403s 00:20:19.765 user 0m29.877s 00:20:19.765 sys 0m4.616s 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.765 ************************************ 00:20:19.765 END TEST nvmf_auth_host 00:20:19.765 ************************************ 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.765 ************************************ 00:20:19.765 START TEST nvmf_digest 00:20:19.765 ************************************ 00:20:19.765 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:20.025 * Looking for test storage... 
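(Annotation, for readers skimming the log: the wall of JSON-RPC failures above is the tail end of host/auth.sh's negative matrix. Every bdev_nvme_attach_controller attempt made without the DH-CHAP key material the kernel target expects -- no key, the wrong key, or a mismatched controller key -- must fail, and the NOT/valid_exec_arg wrapper from autotest_common.sh converts that expected non-zero exit status into a test pass. Reduced to one representative call, with the arguments exactly as they appear in the trace above (rpc_cmd is the harness's JSON-RPC helper):

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2
# expected outcome, per the trace: Code=-5 Msg=Input/output error -- the connect is
# rejected, the NOT wrapper treats that as success, and bdev_nvme_get_controllers
# afterwards still reports an empty list.

The cleanup traced above and just below (rmmod nvme_tcp/nvme_fabrics, killprocess 91716, removal of the configfs entries under /sys/kernel/config/nvmet, setup.sh to rebind the NVMe devices, and rm of the /tmp/spdk.key-* files) returns the VM to a neutral state before the digest suite starts.)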
00:20:20.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.025 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.025 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:20.025 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.025 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.026 
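(Annotation: digest.sh now calls nvmftestinit and, because this run uses the virtual net type, the harness builds its NVMe/TCP test network out of veth pairs rather than real NICs. The iproute2/iptables trace that follows is noisy; condensed, and using only names and addresses that appear in it, the topology comes down to:

ip netns add nvmf_tgt_ns_spdk                                   # the SPDK target will run in this namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip link add nvmf_br type bridge                                 # bridge ties the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# (a second target interface, nvmf_tgt_if2 with 10.0.0.3, is added the same way, and every
#  interface plus the bridge is then brought up with 'ip link set ... up')

The "Cannot find device" and "Cannot open network namespace" messages in the trace are the teardown of a previous topology that does not exist yet on a fresh VM; each is followed by "true", so they are expected. The three single pings at the end confirm that 10.0.0.1 (initiator side) and 10.0.0.2/10.0.0.3 (target namespace) can reach each other before any NVMe-oF traffic is attempted.)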
07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:20.026 Cannot find device "nvmf_tgt_br" 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.026 Cannot find device "nvmf_tgt_br2" 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:20.026 
Cannot find device "nvmf_tgt_br" 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:20.026 Cannot find device "nvmf_tgt_br2" 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.026 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:20.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:20:20.287 00:20:20.287 --- 10.0.0.2 ping statistics --- 00:20:20.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.287 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:20.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:20.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:20:20.287 00:20:20.287 --- 10.0.0.3 ping statistics --- 00:20:20.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.287 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:20:20.287 00:20:20.287 --- 10.0.0.1 ping statistics --- 00:20:20.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.287 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:20.287 ************************************ 00:20:20.287 START TEST nvmf_digest_clean 00:20:20.287 
************************************ 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93280 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93280 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93280 ']' 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.287 07:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.287 [2024-07-25 07:34:52.939883] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:20.287 [2024-07-25 07:34:52.939951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.547 [2024-07-25 07:34:53.066167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.547 [2024-07-25 07:34:53.154580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.547 [2024-07-25 07:34:53.154623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:20.547 [2024-07-25 07:34:53.154630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.547 [2024-07-25 07:34:53.154634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.547 [2024-07-25 07:34:53.154638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.547 [2024-07-25 07:34:53.154672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.121 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.380 null0 00:20:21.380 [2024-07-25 07:34:53.921375] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.380 [2024-07-25 07:34:53.945422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93332 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93332 /var/tmp/bperf.sock 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93332 ']' 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.380 07:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.380 [2024-07-25 07:34:53.998740] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:21.380 [2024-07-25 07:34:53.998812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93332 ] 00:20:21.640 [2024-07-25 07:34:54.135309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.640 [2024-07-25 07:34:54.224038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.208 07:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.208 07:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:22.208 07:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:22.208 07:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:22.208 07:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:22.467 07:34:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.467 07:34:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.726 nvme0n1 00:20:22.726 07:34:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:22.726 07:34:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:22.726 Running I/O for 2 seconds... 
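(Annotation: the two-second summary that follows was produced by run_bperf's standard sequence, which repeats for every digest workload below: bdevperf is launched paused on its own RPC socket, framework_start_init releases it, a controller is attached across the veth network with the data digest enabled, and bdevperf.py drives the timed run. Condensed sketch, paths relative to the spdk repo and arguments exactly as used above; the process is backgrounded here only for illustration:

build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag enables the CRC32C data digest on NVMe/TCP data PDUs, which is the work this test is really exercising; with scan_dsa=false those digests are expected to be computed by the software accel module.)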
00:20:25.261 00:20:25.261 Latency(us) 00:20:25.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.261 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:25.261 nvme0n1 : 2.00 23235.55 90.76 0.00 0.00 5502.87 2561.34 15339.43 00:20:25.261 =================================================================================================================== 00:20:25.261 Total : 23235.55 90.76 0.00 0.00 5502.87 2561.34 15339.43 00:20:25.261 0 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:25.261 | select(.opcode=="crc32c") 00:20:25.261 | "\(.module_name) \(.executed)"' 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93332 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93332 ']' 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93332 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93332 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93332' 00:20:25.261 killing process with pid 93332 00:20:25.261 Received shutdown signal, test time was about 2.000000 seconds 00:20:25.261 00:20:25.261 Latency(us) 00:20:25.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.261 =================================================================================================================== 00:20:25.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93332 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
93332 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93420 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93420 /var/tmp/bperf.sock 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93420 ']' 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.261 07:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.261 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:25.261 Zero copy mechanism will not be used. 00:20:25.261 [2024-07-25 07:34:57.900660] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:20:25.261 [2024-07-25 07:34:57.900746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93420 ] 00:20:25.521 [2024-07-25 07:34:58.025377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.521 [2024-07-25 07:34:58.118969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.096 07:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.096 07:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:26.096 07:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:26.096 07:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:26.096 07:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.365 07:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.365 07:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.624 nvme0n1 00:20:26.624 07:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:26.624 07:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:26.883 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:26.883 Zero copy mechanism will not be used. 00:20:26.883 Running I/O for 2 seconds... 
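(Annotation: between the first latency summary above and this second run, the harness also verified where the digest work actually happened: it pulls accel statistics from the bperf process and checks that crc32c operations were executed by the expected module -- software in this configuration, since DSA was not requested. The same check is repeated after the 131072-byte read run whose output follows, and again after the later randwrite pass. A minimal re-creation of the check, using the socket path and jq filter from the trace:

read -r acc_module acc_executed < <(
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 )) && [[ $acc_module == software ]]   # digests were computed, by the software module

After each check the bdevperf instance is killed and run_bperf starts over with the next workload; a 4096-byte randwrite pass at queue depth 128 follows the read runs in this suite.)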
00:20:28.789 00:20:28.789 Latency(us) 00:20:28.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.789 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:28.789 nvme0n1 : 2.00 8054.96 1006.87 0.00 0.00 1983.31 604.56 14366.41 00:20:28.789 =================================================================================================================== 00:20:28.789 Total : 8054.96 1006.87 0.00 0.00 1983.31 604.56 14366.41 00:20:28.789 0 00:20:28.789 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:28.789 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:28.789 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:28.789 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:28.789 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:28.789 | select(.opcode=="crc32c") 00:20:28.789 | "\(.module_name) \(.executed)"' 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93420 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93420 ']' 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93420 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93420 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:29.048 killing process with pid 93420 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93420' 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93420 00:20:29.048 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.048 00:20:29.048 Latency(us) 00:20:29.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.048 =================================================================================================================== 00:20:29.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.048 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
93420 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93505 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93505 /var/tmp/bperf.sock 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93505 ']' 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.306 07:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.306 [2024-07-25 07:35:01.869990] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:20:29.306 [2024-07-25 07:35:01.870065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93505 ] 00:20:29.306 [2024-07-25 07:35:02.006310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.565 [2024-07-25 07:35:02.093211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.131 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.131 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:30.131 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:30.131 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:30.131 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:30.389 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.389 07:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.646 nvme0n1 00:20:30.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:30.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:30.646 Running I/O for 2 seconds... 
00:20:33.175 00:20:33.175 Latency(us) 00:20:33.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.175 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:33.175 nvme0n1 : 2.00 29732.38 116.14 0.00 0.00 4299.27 1731.41 10932.21 00:20:33.175 =================================================================================================================== 00:20:33.175 Total : 29732.38 116.14 0.00 0.00 4299.27 1731.41 10932.21 00:20:33.175 0 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:33.175 | select(.opcode=="crc32c") 00:20:33.175 | "\(.module_name) \(.executed)"' 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93505 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93505 ']' 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93505 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93505 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93505' 00:20:33.175 killing process with pid 93505 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93505 00:20:33.175 Received shutdown signal, test time was about 2.000000 seconds 00:20:33.175 00:20:33.175 Latency(us) 00:20:33.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.175 =================================================================================================================== 00:20:33.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
93505 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93595 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93595 /var/tmp/bperf.sock 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93595 ']' 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.175 07:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:33.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:33.175 Zero copy mechanism will not be used. 00:20:33.175 [2024-07-25 07:35:05.795216] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:20:33.175 [2024-07-25 07:35:05.795270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93595 ] 00:20:33.433 [2024-07-25 07:35:05.921703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.433 [2024-07-25 07:35:06.006366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.999 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.999 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:33.999 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:33.999 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:33.999 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:34.258 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.258 07:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.517 nvme0n1 00:20:34.517 07:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:34.517 07:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:34.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:34.775 Zero copy mechanism will not be used. 00:20:34.776 Running I/O for 2 seconds... 
00:20:36.682 00:20:36.682 Latency(us) 00:20:36.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.682 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:36.682 nvme0n1 : 2.00 6998.02 874.75 0.00 0.00 2282.76 1874.50 6296.03 00:20:36.682 =================================================================================================================== 00:20:36.682 Total : 6998.02 874.75 0.00 0.00 2282.76 1874.50 6296.03 00:20:36.682 0 00:20:36.682 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:36.682 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:36.682 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:36.682 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:36.682 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:36.682 | select(.opcode=="crc32c") 00:20:36.682 | "\(.module_name) \(.executed)"' 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93595 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93595 ']' 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93595 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93595 00:20:36.942 killing process with pid 93595 00:20:36.942 Received shutdown signal, test time was about 2.000000 seconds 00:20:36.942 00:20:36.942 Latency(us) 00:20:36.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.942 =================================================================================================================== 00:20:36.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93595' 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93595 00:20:36.942 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
93595 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93280 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93280 ']' 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93280 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93280 00:20:37.201 killing process with pid 93280 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:37.201 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:37.202 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93280' 00:20:37.202 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93280 00:20:37.202 07:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93280 00:20:37.461 ************************************ 00:20:37.461 END TEST nvmf_digest_clean 00:20:37.461 ************************************ 00:20:37.461 00:20:37.461 real 0m17.161s 00:20:37.461 user 0m30.864s 00:20:37.461 sys 0m4.804s 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 ************************************ 00:20:37.461 START TEST nvmf_digest_error 00:20:37.461 ************************************ 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93707 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # 
waitforlisten 93707 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93707 ']' 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.461 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 [2024-07-25 07:35:10.162438] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:37.461 [2024-07-25 07:35:10.162518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.721 [2024-07-25 07:35:10.300492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.721 [2024-07-25 07:35:10.432749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.721 [2024-07-25 07:35:10.432810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.721 [2024-07-25 07:35:10.432818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.721 [2024-07-25 07:35:10.432824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.721 [2024-07-25 07:35:10.432829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.721 [2024-07-25 07:35:10.432861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.290 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.290 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:38.290 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.290 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.290 07:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.550 [2024-07-25 07:35:11.040196] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.550 null0 00:20:38.550 [2024-07-25 07:35:11.190801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.550 [2024-07-25 07:35:11.214888] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93751 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93751 /var/tmp/bperf.sock 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93751 ']' 00:20:38.550 07:35:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.550 07:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.550 [2024-07-25 07:35:11.275624] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:38.550 [2024-07-25 07:35:11.275685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93751 ] 00:20:38.809 [2024-07-25 07:35:11.411648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.809 [2024-07-25 07:35:11.490740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.378 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.378 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:39.378 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:39.378 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:39.637 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:39.637 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.637 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:39.637 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.637 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.637 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.901 nvme0n1 00:20:39.901 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:39.901 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.901 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:39.901 07:35:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.901 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:39.901 07:35:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:40.162 Running I/O for 2 seconds... 00:20:40.162 [2024-07-25 07:35:12.681751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.681810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.681821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.691804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.691833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.691841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.703056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.703083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.703108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.712562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.712589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.712597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.722410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.722436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.722443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.734886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.734915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.734922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:40.162 [2024-07-25 07:35:12.746869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.746898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.746906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.755771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.755800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.755807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.768307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.768335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.768342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.778645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.778674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.778682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.789030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.789074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.789082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.798430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.798458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.798487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.807770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.807799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.817437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.817463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.817470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.829339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.829365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.829372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.839603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.839631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.839638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.847994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.848024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.162 [2024-07-25 07:35:12.848032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.162 [2024-07-25 07:35:12.860301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.162 [2024-07-25 07:35:12.860331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.163 [2024-07-25 07:35:12.860339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.163 [2024-07-25 07:35:12.869401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.163 [2024-07-25 07:35:12.869430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.163 [2024-07-25 07:35:12.869438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.163 [2024-07-25 07:35:12.879353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.163 [2024-07-25 07:35:12.879380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.163 [2024-07-25 07:35:12.879388] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.163 [2024-07-25 07:35:12.889803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.163 [2024-07-25 07:35:12.889830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.163 [2024-07-25 07:35:12.889837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.900917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.900946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.900970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.908999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.909027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.909034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.920903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.920930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.920938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.931211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.931238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.931246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.939795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.939825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.939832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.950877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.950905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.950912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.960988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.961013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.961020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.970406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.970433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.970440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.982015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.982042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.982049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:12.991526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:12.991554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:12.991563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:13.000706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:13.000733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:13.000740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:13.011713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:13.011741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:13.011764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:13.020719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:13.020747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:40.423 [2024-07-25 07:35:13.020754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.423 [2024-07-25 07:35:13.030645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.423 [2024-07-25 07:35:13.030672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.423 [2024-07-25 07:35:13.030678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.041650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.041678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.041686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.050419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.050448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.050456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.061527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.061554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.061579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.072088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.072125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.072134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.080670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.080697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.080720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.091580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.091609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23117 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.091616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.103043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.103071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.103095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.112174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.112200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.112208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.122468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.122515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.122539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.131725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.131765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.131788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.142390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.142419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.142426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.424 [2024-07-25 07:35:13.151047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.424 [2024-07-25 07:35:13.151076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.424 [2024-07-25 07:35:13.151083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.685 [2024-07-25 07:35:13.161625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.685 [2024-07-25 07:35:13.161657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.685 [2024-07-25 07:35:13.161665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.685 [2024-07-25 07:35:13.172461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.685 [2024-07-25 07:35:13.172490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.685 [2024-07-25 07:35:13.172498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.685 [2024-07-25 07:35:13.181503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.685 [2024-07-25 07:35:13.181530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.685 [2024-07-25 07:35:13.181538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.685 [2024-07-25 07:35:13.193073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.685 [2024-07-25 07:35:13.193103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.685 [2024-07-25 07:35:13.193126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.685 [2024-07-25 07:35:13.204208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.685 [2024-07-25 07:35:13.204234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.685 [2024-07-25 07:35:13.204242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.685 [2024-07-25 07:35:13.213168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.685 [2024-07-25 07:35:13.213193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.213201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.225218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.225245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.225252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.235029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.235056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.235064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.246949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.246977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.246984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.257056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.257084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.257091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.268465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.268491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.268498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.277789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.277817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.277825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.287331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.287359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.287383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.296153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.296178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.296185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.307219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 
[2024-07-25 07:35:13.307248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.307256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.316863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.316892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.316899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.326118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.326154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.326162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.336712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.336737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.336744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.346408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.346435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.346442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.355687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.355715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.355722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.366217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.366242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.366250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.377058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.377084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.377091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.386610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.386637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.386644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.397626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.397651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.397675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.686 [2024-07-25 07:35:13.407250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.686 [2024-07-25 07:35:13.407277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.686 [2024-07-25 07:35:13.407284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.946 [2024-07-25 07:35:13.418845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.946 [2024-07-25 07:35:13.418884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.946 [2024-07-25 07:35:13.418895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.946 [2024-07-25 07:35:13.429389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.946 [2024-07-25 07:35:13.429414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.946 [2024-07-25 07:35:13.429422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.946 [2024-07-25 07:35:13.438444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.946 [2024-07-25 07:35:13.438472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.946 [2024-07-25 07:35:13.438486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.946 [2024-07-25 07:35:13.449357] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.449384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.449392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.458962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.458990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.458998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.468718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.468747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.468755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.477822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.477849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.477857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.490640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.490669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.490676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.499127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.499165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.499172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.510562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.510590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.510597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:40.947 [2024-07-25 07:35:13.520653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.520680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.520688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.531125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.531152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.531176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.541553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.541580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.541588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.550510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.550540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.550563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.560963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.560991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.560998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.570302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.570329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.570336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.580148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.580173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.580181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.590708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.590734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.590742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.599779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.599808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.599830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.609927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.609955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.609963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.619734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.619788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.630204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.630231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.630238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.640282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.640308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.640315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.650153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.650178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.650185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.661356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.661382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.661390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.947 [2024-07-25 07:35:13.672045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:40.947 [2024-07-25 07:35:13.672072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.947 [2024-07-25 07:35:13.672078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.680660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.680690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.680698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.690860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.690890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.690898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.702098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.702151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.702158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.711904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.711932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.711955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.721255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.721281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.721288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.732177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.732203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.732210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.743042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.743069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.743076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.753686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.753713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.753720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.762179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.762206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.762214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.772681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.772708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.772716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.784028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.208 [2024-07-25 07:35:13.784054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.208 [2024-07-25 07:35:13.784061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.208 [2024-07-25 07:35:13.794175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.794201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:41.209 [2024-07-25 07:35:13.794208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.805069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.805100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.805109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.816360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.816389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.816397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.825641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.825668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.825691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.836924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.836950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.836958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.846485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.846528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.846535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.855877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.855903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.855910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.866655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.866679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3258 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.866686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.875838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.875868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.875875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.887360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.887388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.887397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.897501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.897528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.897535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.908381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.908407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.908414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.916949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.916976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.916983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.928135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.928163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.928170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.209 [2024-07-25 07:35:13.938331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.209 [2024-07-25 07:35:13.938359] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.209 [2024-07-25 07:35:13.938366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:13.947926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:13.947954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:13.947961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:13.957660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:13.957687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:13.957694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:13.967748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:13.967776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:13.967783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:13.978873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:13.978900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:13.978908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:13.988444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:13.988490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:13.988497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:13.998941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:13.998971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:13.998980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.009391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.009418] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.009425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.020189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.020215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.020223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.030057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.030082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.030089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.041486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.041513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.041520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.049436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.049463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.049470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.061486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.061512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.061519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.072066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.072096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.072103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.080449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 
00:20:41.470 [2024-07-25 07:35:14.080474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.080481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.090103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.090154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.090161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.102083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.102110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.102142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.111142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.111169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.111176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.122399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.122427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.122434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.133753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.133780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.133787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.141605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.141631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.141639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.152300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.470 [2024-07-25 07:35:14.152326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.470 [2024-07-25 07:35:14.152333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.470 [2024-07-25 07:35:14.163395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.471 [2024-07-25 07:35:14.163423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.471 [2024-07-25 07:35:14.163446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.471 [2024-07-25 07:35:14.172790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.471 [2024-07-25 07:35:14.172816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.471 [2024-07-25 07:35:14.172823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.471 [2024-07-25 07:35:14.183030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.471 [2024-07-25 07:35:14.183057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.471 [2024-07-25 07:35:14.183064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.471 [2024-07-25 07:35:14.192378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.471 [2024-07-25 07:35:14.192403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.471 [2024-07-25 07:35:14.192410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.731 [2024-07-25 07:35:14.204673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.731 [2024-07-25 07:35:14.204705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.731 [2024-07-25 07:35:14.204713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.731 [2024-07-25 07:35:14.215626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.731 [2024-07-25 07:35:14.215656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.731 [2024-07-25 07:35:14.215665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.731 [2024-07-25 07:35:14.225932] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.731 [2024-07-25 07:35:14.225962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.731 [2024-07-25 07:35:14.225969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.731 [2024-07-25 07:35:14.236193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.731 [2024-07-25 07:35:14.236221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.731 [2024-07-25 07:35:14.236230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.731 [2024-07-25 07:35:14.244066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.731 [2024-07-25 07:35:14.244096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.731 [2024-07-25 07:35:14.244103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.731 [2024-07-25 07:35:14.256330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.256358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.256366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.268297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.268326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.268333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.278417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.278443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.278450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.288658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.288685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.288692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:41.732 [2024-07-25 07:35:14.297549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.297575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.297582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.308828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.308854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.308862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.319057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.319086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.319109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.330126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.330157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.330165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.339701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.339752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.339760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.351395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.351425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.351434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.362081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.362108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.362140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.370936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.370964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.370971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.382111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.382161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.382169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.391102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.391139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.391147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.401660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.401687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.401695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.412778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.412806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.412813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.423693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.423731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.423739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.433125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.433160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.433168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.443384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.443410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.443418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.453464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.453491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.453514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.732 [2024-07-25 07:35:14.464023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.732 [2024-07-25 07:35:14.464054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.732 [2024-07-25 07:35:14.464062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.473393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.473421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.473428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.484690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.484718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.484740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.493682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.493711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.493718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.503204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.503232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 
[2024-07-25 07:35:14.503240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.513013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.513043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.513067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.525125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.525162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.525169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.534638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.534668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.534676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.544632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.544658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.544665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.555678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.555707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.555714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.566065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.566093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.566116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.577176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.577203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12059 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.577211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.587068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.587098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.587106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.598519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.598549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.598556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.608579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.608607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.608615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.618481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.618522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.618530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.628444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.628473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.628480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.639184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.639213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.993 [2024-07-25 07:35:14.639236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.993 [2024-07-25 07:35:14.649033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30) 00:20:41.993 [2024-07-25 07:35:14.649063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:15717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.993 [2024-07-25 07:35:14.649086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.993 [2024-07-25 07:35:14.660167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf65e30)
00:20:41.994 [2024-07-25 07:35:14.660198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.994 [2024-07-25 07:35:14.660208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.994
00:20:41.994 Latency(us)
00:20:41.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:41.994 nvme0n1 : 2.00 24749.03 96.68 0.00 0.00 5165.47 2418.25 14080.22
00:20:41.994 ===================================================================================================================
00:20:41.994 Total : 24749.03 96.68 0.00 0.00 5165.47 2418.25 14080.22
00:20:41.994 0
00:20:41.994 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:41.994 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:41.994 | .driver_specific
00:20:41.994 | .nvme_error
00:20:41.994 | .status_code
00:20:41.994 | .command_transient_transport_error'
00:20:41.994 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:41.994 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93751
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93751 ']'
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93751
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93751
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93751'
00:20:42.254 killing process with pid 93751
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93751
00:20:42.254 Received shutdown signal, test time was about 2.000000 seconds
00:20:42.254
00:20:42.254 Latency(us)
00:20:42.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:42.254 ===================================================================================================================
00:20:42.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:42.254 07:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93751
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93837
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93837 /var/tmp/bperf.sock
00:20:42.514 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:42.515 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93837 ']'
00:20:42.515 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:42.515 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:42.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:42.515 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:42.515 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:42.515 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:42.515 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:42.515 Zero copy mechanism will not be used.
00:20:42.515 [2024-07-25 07:35:15.156906] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization...
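For reference, the transient-error check traced a few lines up (host/digest.sh@71, where (( 194 > 0 )) gates the teardown) boils down to the following. This is a minimal sketch rather than the digest.sh source; it assumes it runs while bdevperf is still listening on /var/tmp/bperf.sock (i.e. before killprocess) and that the bdev is named nvme0n1 as in this run.

#!/usr/bin/env bash
# Sketch of the transient-error check from the trace above (not the digest.sh source).
# Assumes bdevperf is still listening on /var/tmp/bperf.sock and the bdev is nvme0n1.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# bdev_get_iostat includes per-bdev NVMe error counters because the controller was
# configured with bdev_nvme_set_options --nvme-error-stat earlier in the run.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# The pass criterion is simply that the injected digest errors surfaced as
# transient transport errors (194 of them in the run above).
(( errcount > 0 )) && echo "saw $errcount transient transport errors on $bdev"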
00:20:42.515 [2024-07-25 07:35:15.156970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93837 ]
00:20:42.774 [2024-07-25 07:35:15.295139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:42.774 [2024-07-25 07:35:15.379453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:43.343 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:43.343 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:20:43.343 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:43.343 07:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:43.602 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:43.602 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:43.602 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:43.602 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:43.602 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:43.602 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:43.862 nvme0n1
00:20:43.862 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:43.862 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:43.862 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:43.862 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:43.862 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:43.862 07:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:43.862 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:43.862 Zero copy mechanism will not be used.
00:20:43.862 Running I/O for 2 seconds...
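Condensed for readability, the setup the trace above just performed for this 128 KiB, qd=16 error pass looks roughly like the following. It is an illustration assembled from the traced commands, not the digest.sh source; the paths, the 10.0.0.2/4420 target and the nqn.2016-06.io.spdk:cnode1 subsystem are the ones used in this run, and the accel_error_inject_error call is assumed to go to the target application's default RPC socket, which is where the rpc_cmd helper normally points.

#!/usr/bin/env bash
# Illustration of the digest-error pass set up in the trace above (assumptions noted inline).
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock

# Host side: bdevperf in wait-for-RPC mode (-z), 128 KiB random reads, qd 16, 2 s runtime.
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
sleep 1  # the real script polls the socket with waitforlisten instead of sleeping

# Count NVMe errors per status code and retry indefinitely so digest errors stay transient.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP target with data digest enabled so the host checks the payload CRC.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: corrupt crc32c results via the accel error module (-t corrupt -i 32, as traced;
# default RPC socket assumed here, since the trace goes through the rpc_cmd helper).
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the timed run through bdevperf's RPC interface.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests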
00:20:43.862 [2024-07-25 07:35:16.563408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.563453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.563463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.569250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.569281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.569288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.573927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.573957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.573963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.578951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.578983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.578991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.582395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.582422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.582429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.586767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.586797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.586805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.590351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.590379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.590386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.862 [2024-07-25 07:35:16.594481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:43.862 [2024-07-25 07:35:16.594515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.862 [2024-07-25 07:35:16.594523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.599275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.599306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.599314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.604025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.604053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.604060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.607996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.608024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.608032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.611387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.611425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.611432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.615600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.615631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.615639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.619208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.619238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.619245] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.622884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.622914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.622922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.626657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.626686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.626693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.630154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.630180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.630187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.633706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.633734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.633756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.637688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.637715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.637722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.641920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.641948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.641955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.645357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.645383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.645389] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.649042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.649070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.649078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.653347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.653375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.653381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.657044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.657072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.657079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.660706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.660732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.660739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.664749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.664779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.664786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.669316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.669345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.669352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.674138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.674165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:44.124 [2024-07-25 07:35:16.674172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.677376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.677402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.677409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.681141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.681166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.681173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.685610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.685637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.685644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.689900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.689928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.689934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.693241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.693270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.124 [2024-07-25 07:35:16.693278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.124 [2024-07-25 07:35:16.697379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.124 [2024-07-25 07:35:16.697405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.697412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.700428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.700455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.700462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.704589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.704618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.704626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.708946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.708974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.708997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.712182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.712210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.712217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.716136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.716163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.716170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.720668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.720697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.720704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.724949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.724976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.724983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.728049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.728079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.728087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.732115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.732155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.732163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.736350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.736378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.736386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.739620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.739650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.743851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.743882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.743889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.747115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.747151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.747159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.750948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.750977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.750984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.755732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.755764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.755771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.759194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.759221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.759228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.763166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.763193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.763201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.766760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.766790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.766798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.770769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.770800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.770808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.774584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.774613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.774620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.778545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.778574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.778582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.782148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.782186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.782194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.786207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.786233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.786241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.789321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.789359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.789366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.793767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.793794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.793801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.796841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.796868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.125 [2024-07-25 07:35:16.796874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.125 [2024-07-25 07:35:16.801234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.125 [2024-07-25 07:35:16.801260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.801267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.804805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.804833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.804840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.808425] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.808452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.808459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.811952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.811979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.811986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.815733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.815761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.815768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.820212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.820239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.820263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.823461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.823491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.823498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.827115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.827155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.827163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.126 [2024-07-25 07:35:16.831659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.126 [2024-07-25 07:35:16.831688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.126 [2024-07-25 07:35:16.831695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:44.126 [2024-07-25 07:35:16.836311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:44.126 [2024-07-25 07:35:16.836340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.126 [2024-07-25 07:35:16.836348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message sequence (data digest error on tqpair=(0x162efd0), the affected READ on qid:1, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid and lba from 07:35:16.839 through 07:35:17.389 ...]
00:20:44.914 [2024-07-25 07:35:17.393916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:44.914 [2024-07-25 07:35:17.393946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.914 [2024-07-25 07:35:17.393969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:44.914 [2024-07-25 07:35:17.398988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.399020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.399028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.403492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.403526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.403535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.407958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.407987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.408011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.411419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.411453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.411462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.415335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.415367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.415374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.419019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.419048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.419056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.422931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.422961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.914 [2024-07-25 07:35:17.422968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.914 [2024-07-25 07:35:17.427294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.914 [2024-07-25 07:35:17.427328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.427337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.431329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.431359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.431367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.434463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.434497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.434512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.438442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.438469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.438482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.443429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.443459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.443467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.447746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.447774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.447782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.450527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.450567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.450574] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.454727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.454757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.454765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.458769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.458798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.458805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.462008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.462035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.462042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.466091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.466145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.466153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.470005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.470033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.470057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.473381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.473408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.473415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.477688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.477718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 
07:35:17.477726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.481049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.481077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.481085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.485641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.485672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.485679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.489363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.489393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.489400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.493330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.493359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.493366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.497319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.497348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.497356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.500529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.500556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.500579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.504369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.504397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.504405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.508756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.508784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.508791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.512001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.512028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.512035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.515956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.515983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.516006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.519929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.519955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.519962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.523629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.523658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.523665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.527841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.915 [2024-07-25 07:35:17.527880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.915 [2024-07-25 07:35:17.527887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.915 [2024-07-25 07:35:17.531170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.531197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.531204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.535552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.535583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.535591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.538966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.538995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.539003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.542265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.542293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.542301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.545822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.545849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.545856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.549449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.549476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.549482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.553031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.553061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.553069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.556592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.556622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.556629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.560110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.560146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.560154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.564054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.564081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.564088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.567481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.567511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.567518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.571647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.571679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.571686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.576083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.576111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.576143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.579951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.579978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.579986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.583533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 
00:20:44.916 [2024-07-25 07:35:17.583563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.583570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.587420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.587450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.587457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.592098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.592133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.592157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.595678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.595707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.595714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.599421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.599451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.599459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.604058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.604086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.604109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.607443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.607472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.607480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.611571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.611600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.611607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.615708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.615736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.615743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.619637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.619664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.619671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.623503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.623535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.623543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.627973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.628000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.916 [2024-07-25 07:35:17.628007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.916 [2024-07-25 07:35:17.631449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.916 [2024-07-25 07:35:17.631490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.917 [2024-07-25 07:35:17.631497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.917 [2024-07-25 07:35:17.635485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.917 [2024-07-25 07:35:17.635516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.917 [2024-07-25 07:35:17.635523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.917 [2024-07-25 07:35:17.639314] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.917 [2024-07-25 07:35:17.639343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.917 [2024-07-25 07:35:17.639350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.917 [2024-07-25 07:35:17.642847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:44.917 [2024-07-25 07:35:17.642880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.917 [2024-07-25 07:35:17.642888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.646877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.646910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.646918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.650939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.650971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.650979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.653865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.653896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.653904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.658430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.658461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.658469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.662918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.662949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.662956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.666588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.666618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.666626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.670273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.670302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.670309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.674019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.674047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.674054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.678205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.678231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.678238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.681628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.681656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.685389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.685416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.685423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.689406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.689433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.689440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.693062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.693089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.693096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.696836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.696864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.696871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.700505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.178 [2024-07-25 07:35:17.700549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.178 [2024-07-25 07:35:17.700556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.178 [2024-07-25 07:35:17.704079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.704107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.704140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.707942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.707969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.707977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.712380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.712407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.712413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.716957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.716984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.716991] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.721647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.721674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.721681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.726383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.726411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.726419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.730015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.730044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.730051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.733939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.733966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.733973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.738402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.738429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.738436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.741566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.741593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.741599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.179 [2024-07-25 07:35:17.745688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.179 [2024-07-25 07:35:17.745714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.179 [2024-07-25 07:35:17.745721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:45.179 [2024-07-25 07:35:17.750099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:45.179 [2024-07-25 07:35:17.750151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.179 [2024-07-25 07:35:17.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:45.179 [2024-07-25 07:35:17.754138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:45.179 [2024-07-25 07:35:17.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.179 [2024-07-25 07:35:17.754170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:45.179 [2024-07-25 07:35:17.758028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:45.179 [2024-07-25 07:35:17.758055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.179 [2024-07-25 07:35:17.758062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1459 data digest error on tqpair=(0x162efd0), nvme_qpair.c READ sqid:1, COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1) repeats for the remaining completions from 07:35:17.761 through 07:35:18.305; only the timestamps and the cid, lba, and sqhd values differ ...]
00:20:45.705 [2024-07-25 07:35:18.309347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:45.705 [2024-07-25 07:35:18.309374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.705 [2024-07-25 07:35:18.309381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:45.705 [2024-07-25 07:35:18.312967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0)
00:20:45.705 [2024-07-25 07:35:18.312994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.705 [2024-07-25 07:35:18.313017]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.316445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.316473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.316479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.320734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.320763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.320786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.323885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.323915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.323922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.328572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.328599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.328606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.332999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.333026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.333032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.337331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.337358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.337365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.340562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.340591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:45.705 [2024-07-25 07:35:18.340614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.345128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.345186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.345194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.349798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.349828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.705 [2024-07-25 07:35:18.349851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.705 [2024-07-25 07:35:18.353450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.705 [2024-07-25 07:35:18.353475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.353483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.357589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.357618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.357640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.362242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.362270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.362292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.365603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.365631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.365654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.369622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.369649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.369672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.373273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.373301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.373308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.376814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.376844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.376852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.380471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.380499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.380521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.384562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.384599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.384606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.388187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.388216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.388223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.392189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.392217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.392224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.396255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.396286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.396294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.399428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.399459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.399467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.403163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.403188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.403196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.407013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.407043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.407050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.410590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.410619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.410627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.413925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.413953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.413961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.417709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.417738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.417744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.422076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 
[2024-07-25 07:35:18.422106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.422121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.425803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.425831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.425839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.429477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.429505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.429512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.432784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.432817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.432827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.706 [2024-07-25 07:35:18.436444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.706 [2024-07-25 07:35:18.436476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.706 [2024-07-25 07:35:18.436484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.971 [2024-07-25 07:35:18.439906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.971 [2024-07-25 07:35:18.439938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.971 [2024-07-25 07:35:18.439946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.971 [2024-07-25 07:35:18.445008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.971 [2024-07-25 07:35:18.445045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.971 [2024-07-25 07:35:18.445055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.971 [2024-07-25 07:35:18.448821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x162efd0) 00:20:45.971 [2024-07-25 07:35:18.448855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.971 [2024-07-25 07:35:18.448863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.971 [2024-07-25 07:35:18.453136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.971 [2024-07-25 07:35:18.453166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.971 [2024-07-25 07:35:18.453174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.457994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.458027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.458038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.462085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.462131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.462144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.465778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.465823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.465831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.470544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.470577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.470586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.475446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.475480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.475489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.480364] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.480394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.480412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.484004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.484034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.484041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.487941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.487971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.487979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.492602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.492632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.492641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.496865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.496896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.496905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.500162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.500190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.500198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.504755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.504787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.504795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:20:45.972 [2024-07-25 07:35:18.508203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.508230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.508238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.512354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.512382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.512389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.515968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.515997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.516005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.519603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.519634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.519641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.523177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.523206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.523213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.527119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.527157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.527165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.530153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.530177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.530185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.534714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.534744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.534752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.539217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.539245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.539254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.543601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.543631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.543638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:45.972 [2024-07-25 07:35:18.546850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162efd0) 00:20:45.972 [2024-07-25 07:35:18.546881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.972 [2024-07-25 07:35:18.546888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.972 00:20:45.972 Latency(us) 00:20:45.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.972 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:45.972 nvme0n1 : 2.00 7879.33 984.92 0.00 0.00 2027.68 554.48 9501.29 00:20:45.972 =================================================================================================================== 00:20:45.972 Total : 7879.33 984.92 0.00 0.00 2027.68 554.48 9501.29 00:20:45.972 0 00:20:45.972 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:45.972 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:45.972 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:45.972 | .driver_specific 00:20:45.972 | .nvme_error 00:20:45.972 | .status_code 00:20:45.972 | .command_transient_transport_error' 00:20:45.972 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 508 > 0 )) 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93837 00:20:46.238 
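The check traced above pulls the per-status-code NVMe error counters that bdev_nvme keeps when --nvme-error-stat is enabled and asserts that at least one transient transport error was recorded. A minimal sketch of that step, assuming rpc.py and jq are available and bdevperf is listening on /var/tmp/bperf.sock (the helper name and jq filter are taken from the trace; everything else is illustrative):

# Illustrative sketch of the error-count check from the trace above
get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports driver-specific NVMe error counters per status code
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The digest-error test passes only if corrupted digests produced at least one such error
(( errcount > 0 ))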
07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93837 ']' 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93837 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93837 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93837' 00:20:46.238 killing process with pid 93837 00:20:46.238 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93837 00:20:46.238 Received shutdown signal, test time was about 2.000000 seconds 00:20:46.238 00:20:46.238 Latency(us) 00:20:46.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.239 =================================================================================================================== 00:20:46.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.239 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93837 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93918 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93918 /var/tmp/bperf.sock 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93918 ']' 00:20:46.498 07:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:46.498 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.498 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:46.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
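Before the randwrite pass, the trace launches a fresh bdevperf instance in idle mode and waits for its RPC socket. A condensed, illustrative version of that step (the polling loop is a simplified stand-in for waitforlisten, and rpc_get_methods is used here only as a liveness probe):

bperf_sock=/var/tmp/bperf.sock

# Start bdevperf pinned to core 1 (-m 2), idle until perform_tests is issued (-z),
# with the randwrite / 4096-byte / qd=128 / 2-second workload from the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done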
00:20:46.498 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.498 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.498 [2024-07-25 07:35:19.049742] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:46.498 [2024-07-25 07:35:19.049851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93918 ] 00:20:46.498 [2024-07-25 07:35:19.187016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.756 [2024-07-25 07:35:19.273796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.324 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.324 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:47.324 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.324 07:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.583 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:47.583 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.583 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.583 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.583 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.583 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.583 nvme0n1 00:20:47.842 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:47.842 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.842 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.842 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.842 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:47.842 07:35:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:47.842 Running I/O for 2 seconds... 
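The setup that leads into the run above follows a fixed RPC sequence: enable per-status-code NVMe error counters and unlimited bdev retries on the bdevperf side, attach the TCP controller with data digest enabled while CRC32C error injection is off, then turn injection back on so every 256th digest is corrupted. An illustrative replay of those commands, assuming the nvmf target answers on rpc.py's default socket (which is what the rpc_cmd calls in the trace use) and bdevperf on /var/tmp/bperf.sock:

spdk_rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely
"$spdk_rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: make sure no CRC32C corruption is injected while the controller attaches
"$spdk_rpc" accel_error_inject_error -o crc32c -t disable

# Attach the subsystem over TCP with data digest (--ddgst) so payload CRCs are verified
"$spdk_rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: corrupt every 256th CRC32C computation to provoke data digest errors
"$spdk_rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the configured workload; the digest errors logged above appear while this executes
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests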
00:20:47.842 [2024-07-25 07:35:20.445622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ee5c8 00:20:47.842 [2024-07-25 07:35:20.446352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.842 [2024-07-25 07:35:20.446388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.842 [2024-07-25 07:35:20.454081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fac10 00:20:47.842 [2024-07-25 07:35:20.454788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.842 [2024-07-25 07:35:20.454820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:47.842 [2024-07-25 07:35:20.463617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de8a8 00:20:47.842 [2024-07-25 07:35:20.464410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.842 [2024-07-25 07:35:20.464437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:47.842 [2024-07-25 07:35:20.472591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fda78 00:20:47.842 [2024-07-25 07:35:20.473026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.842 [2024-07-25 07:35:20.473049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:47.842 [2024-07-25 07:35:20.481425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7538 00:20:47.843 [2024-07-25 07:35:20.482089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.482127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.491869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190df118 00:20:47.843 [2024-07-25 07:35:20.493007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.493034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.499539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e27f0 00:20:47.843 [2024-07-25 07:35:20.501031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.501059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.507180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0788 00:20:47.843 [2024-07-25 07:35:20.507845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.507872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.516322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ec840 00:20:47.843 [2024-07-25 07:35:20.517081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.517107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.526729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e9e10 00:20:47.843 [2024-07-25 07:35:20.527948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.527974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.535797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ed0b0 00:20:47.843 [2024-07-25 07:35:20.537140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.543494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e27f0 00:20:47.843 [2024-07-25 07:35:20.544238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.544266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.551695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fda78 00:20:47.843 [2024-07-25 07:35:20.553195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.553223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.559194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e9e10 00:20:47.843 [2024-07-25 07:35:20.559826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.559853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:47.843 [2024-07-25 07:35:20.568248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e23b8 00:20:47.843 [2024-07-25 07:35:20.568999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:47.843 [2024-07-25 07:35:20.569026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.579642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f46d0 00:20:48.103 [2024-07-25 07:35:20.581097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.581140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.586015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fda78 00:20:48.103 [2024-07-25 07:35:20.586800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.586833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.596864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190df550 00:20:48.103 [2024-07-25 07:35:20.597979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.598007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.605069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190dfdc0 00:20:48.103 [2024-07-25 07:35:20.606208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.606236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.614251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc560 00:20:48.103 [2024-07-25 07:35:20.615495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.615525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.621965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f6020 00:20:48.103 [2024-07-25 07:35:20.622630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.622657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.630832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f8618 00:20:48.103 [2024-07-25 07:35:20.631734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.631773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.639652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f8618 00:20:48.103 [2024-07-25 07:35:20.640316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.640343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.650624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fbcf0 00:20:48.103 [2024-07-25 07:35:20.652060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.652085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.658305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fd208 00:20:48.103 [2024-07-25 07:35:20.659362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.659388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.667262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e88f8 00:20:48.103 [2024-07-25 07:35:20.668049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.668077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.676398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e88f8 00:20:48.103 [2024-07-25 07:35:20.677170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.677197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.686754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e88f8 00:20:48.103 [2024-07-25 07:35:20.687993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.688020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.694682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eea00 00:20:48.103 [2024-07-25 07:35:20.696094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.696134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.702217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f57b0 00:20:48.103 [2024-07-25 07:35:20.702853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.702880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.711035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e38d0 00:20:48.103 [2024-07-25 07:35:20.711676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.711702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.721226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5658 00:20:48.103 [2024-07-25 07:35:20.722221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.722249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.729861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ec840 00:20:48.103 [2024-07-25 07:35:20.730604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.730648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.737947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e3d08 00:20:48.103 [2024-07-25 07:35:20.739425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.739454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.747614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fa7d8 00:20:48.103 [2024-07-25 07:35:20.748708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 
07:35:20.748735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:48.103 [2024-07-25 07:35:20.755833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0350 00:20:48.103 [2024-07-25 07:35:20.756945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.103 [2024-07-25 07:35:20.756971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.764897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2510 00:20:48.104 [2024-07-25 07:35:20.766094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.766127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.772616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0bc0 00:20:48.104 [2024-07-25 07:35:20.774079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.774110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.780166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e88f8 00:20:48.104 [2024-07-25 07:35:20.780794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.780820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.789238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2d80 00:20:48.104 [2024-07-25 07:35:20.789971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.789998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.798304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4de8 00:20:48.104 [2024-07-25 07:35:20.799174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.799201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.807160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fb048 00:20:48.104 [2024-07-25 07:35:20.808028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:48.104 [2024-07-25 07:35:20.808055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.815677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e6300 00:20:48.104 [2024-07-25 07:35:20.816181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.816200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.825654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e9168 00:20:48.104 [2024-07-25 07:35:20.826763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.826791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.104 [2024-07-25 07:35:20.834292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5ec8 00:20:48.104 [2024-07-25 07:35:20.835531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.104 [2024-07-25 07:35:20.835562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.843092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190dece0 00:20:48.364 [2024-07-25 07:35:20.844289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.850182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f9b30 00:20:48.364 [2024-07-25 07:35:20.850751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.850782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.859447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eb328 00:20:48.364 [2024-07-25 07:35:20.860245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.860271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.868351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ec840 00:20:48.364 [2024-07-25 07:35:20.869144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16027 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.869172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.877970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de470 00:20:48.364 [2024-07-25 07:35:20.878916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.878945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.887620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5a90 00:20:48.364 [2024-07-25 07:35:20.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.888655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.897155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f8e88 00:20:48.364 [2024-07-25 07:35:20.898284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.898311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.906559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ec840 00:20:48.364 [2024-07-25 07:35:20.907795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.907821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.914459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f9f68 00:20:48.364 [2024-07-25 07:35:20.915918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.915947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.922289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5a90 00:20:48.364 [2024-07-25 07:35:20.922948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.922974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.931585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ebfd0 00:20:48.364 [2024-07-25 07:35:20.932360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.932386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.940835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e9168 00:20:48.364 [2024-07-25 07:35:20.941718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.941744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.950099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2948 00:20:48.364 [2024-07-25 07:35:20.951121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.951154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.958788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fa7d8 00:20:48.364 [2024-07-25 07:35:20.960370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.960398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.967912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eb328 00:20:48.364 [2024-07-25 07:35:20.968905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.968934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.977257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7100 00:20:48.364 [2024-07-25 07:35:20.977951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.977980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.987242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eea00 00:20:48.364 [2024-07-25 07:35:20.988475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.988502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:20.993798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190edd58 00:20:48.364 [2024-07-25 07:35:20.994456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:20.994489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:21.003378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc560 00:20:48.364 [2024-07-25 07:35:21.003901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.364 [2024-07-25 07:35:21.003922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.364 [2024-07-25 07:35:21.013380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e7818 00:20:48.364 [2024-07-25 07:35:21.014599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.014627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.022491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f46d0 00:20:48.365 [2024-07-25 07:35:21.023810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.023836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.031589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7538 00:20:48.365 [2024-07-25 07:35:21.033035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.033061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.037736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fb480 00:20:48.365 [2024-07-25 07:35:21.038381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.038407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.046001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190feb58 00:20:48.365 [2024-07-25 07:35:21.046648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.046675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.056630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fb480 00:20:48.365 [2024-07-25 07:35:21.057732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.057759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.063660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0ff8 00:20:48.365 [2024-07-25 07:35:21.064335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.064361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.073296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190dfdc0 00:20:48.365 [2024-07-25 07:35:21.073792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.073821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.082348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e6b70 00:20:48.365 [2024-07-25 07:35:21.082963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.082995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.365 [2024-07-25 07:35:21.091366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e1f80 00:20:48.365 [2024-07-25 07:35:21.092094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.365 [2024-07-25 07:35:21.092135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.624 [2024-07-25 07:35:21.100030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f81e0 00:20:48.624 [2024-07-25 07:35:21.101008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.624 [2024-07-25 07:35:21.101038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.624 [2024-07-25 07:35:21.108695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f81e0 00:20:48.624 [2024-07-25 07:35:21.109824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.624 [2024-07-25 07:35:21.109853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.115701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f96f8 00:20:48.625 [2024-07-25 
07:35:21.116321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.116349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.126258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f6890 00:20:48.625 [2024-07-25 07:35:21.127370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.127400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.134016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f20d8 00:20:48.625 [2024-07-25 07:35:21.134538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.134568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.143450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fd640 00:20:48.625 [2024-07-25 07:35:21.144104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.144155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.153025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f5378 00:20:48.625 [2024-07-25 07:35:21.153960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.153988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.162363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f5378 00:20:48.625 [2024-07-25 07:35:21.163287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.163314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.170982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e1f80 00:20:48.625 [2024-07-25 07:35:21.171918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.171944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.180333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e0630 
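The repeated message pairs above come from the host side of the NVMe/TCP data-digest test: tcp.c reports a CRC32C data digest mismatch on a write PDU, and the corresponding command is then completed with status (00/22), printed as COMMAND TRANSIENT TRANSPORT ERROR. As a rough illustration only (this is not SPDK's implementation; the payload and the "received" digest below are hypothetical values), the following sketch computes a CRC32C digest the way a data digest (DDGST) check might, and flags a mismatch:

/*
 * Minimal, self-contained sketch (not SPDK code) of the kind of check that
 * produces the "Data digest error" lines above: the host computes a CRC32C
 * digest over a received data PDU payload and compares it with the DDGST
 * value carried in the PDU; a mismatch is logged and the command completes
 * with a transient transport error, as seen in this log.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Sanity check: the well-known CRC32C check value for "123456789". */
    const char check[] = "123456789";
    printf("crc32c(\"123456789\") = 0x%08X (expected 0xE3069283)\n",
           crc32c(check, strlen(check)));

    /* Hypothetical 4 KiB data block (len:0x1000 in the log entries) and a
     * deliberately corrupted "received" digest to mimic the injected error. */
    uint8_t payload[0x1000];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t calculated = crc32c(payload, sizeof(payload));
    uint32_t received   = calculated ^ 0x1;   /* flip one bit */

    if (calculated != received)
        printf("Data digest error: calculated=0x%08X received=0x%08X\n",
               calculated, received);
    return 0;
}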
00:20:48.625 [2024-07-25 07:35:21.181321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.181347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.189256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2d80 00:20:48.625 [2024-07-25 07:35:21.189901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.189931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.198420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de038 00:20:48.625 [2024-07-25 07:35:21.199166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.199193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.206854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e1f80 00:20:48.625 [2024-07-25 07:35:21.207907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.207936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.215401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fd208 00:20:48.625 [2024-07-25 07:35:21.216272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.216298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.225199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f31b8 00:20:48.625 [2024-07-25 07:35:21.226531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.226557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.234204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eaab8 00:20:48.625 [2024-07-25 07:35:21.235659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.235686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.240377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) 
with pdu=0x2000190f8e88 00:20:48.625 [2024-07-25 07:35:21.240989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.241014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.250207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7da8 00:20:48.625 [2024-07-25 07:35:21.251314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.251342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.259275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f81e0 00:20:48.625 [2024-07-25 07:35:21.260467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.260493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.266987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e6fa8 00:20:48.625 [2024-07-25 07:35:21.268466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.268494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.276701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f4298 00:20:48.625 [2024-07-25 07:35:21.277783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.277810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.284903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7538 00:20:48.625 [2024-07-25 07:35:21.285881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.285907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.293192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ea248 00:20:48.625 [2024-07-25 07:35:21.294039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.294067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.302953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x236f320) with pdu=0x2000190f1430 00:20:48.625 [2024-07-25 07:35:21.304274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.304301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.309172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc560 00:20:48.625 [2024-07-25 07:35:21.309778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.309820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.319713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e01f8 00:20:48.625 [2024-07-25 07:35:21.320734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.320762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.327859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4de8 00:20:48.625 [2024-07-25 07:35:21.328755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.328780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.336429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ef6a8 00:20:48.625 [2024-07-25 07:35:21.337097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.625 [2024-07-25 07:35:21.337134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:48.625 [2024-07-25 07:35:21.347123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ef6a8 00:20:48.625 [2024-07-25 07:35:21.348411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.626 [2024-07-25 07:35:21.348437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:48.626 [2024-07-25 07:35:21.355993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc998 00:20:48.626 [2024-07-25 07:35:21.356904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.626 [2024-07-25 07:35:21.356934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.366023] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190df550 00:20:48.885 [2024-07-25 07:35:21.367315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.367345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.375424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2d80 00:20:48.885 [2024-07-25 07:35:21.376793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.376821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.384553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7970 00:20:48.885 [2024-07-25 07:35:21.385949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.385976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.390771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4578 00:20:48.885 [2024-07-25 07:35:21.391407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.391434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.399037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e73e0 00:20:48.885 [2024-07-25 07:35:21.399673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.399701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.407993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2d80 00:20:48.885 [2024-07-25 07:35:21.408730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.408756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.417058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f3e60 00:20:48.885 [2024-07-25 07:35:21.418018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.418044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.426536] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e6b70 00:20:48.885 [2024-07-25 07:35:21.427633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.427658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.437824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e6b70 00:20:48.885 [2024-07-25 07:35:21.439417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.439443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.444710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e01f8 00:20:48.885 [2024-07-25 07:35:21.445387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.445413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.455865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f92c0 00:20:48.885 [2024-07-25 07:35:21.456978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.457005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.463601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc560 00:20:48.885 [2024-07-25 07:35:21.464103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.464137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.472946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7970 00:20:48.885 [2024-07-25 07:35:21.473554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.473581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.482333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f31b8 00:20:48.885 [2024-07-25 07:35:21.483086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.483123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:48.885 
[2024-07-25 07:35:21.490637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e9168 00:20:48.885 [2024-07-25 07:35:21.492026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.492054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.498391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de470 00:20:48.885 [2024-07-25 07:35:21.499043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.499071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.507847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ef270 00:20:48.885 [2024-07-25 07:35:21.508591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.508618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.517250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7970 00:20:48.885 [2024-07-25 07:35:21.518112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.518149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.526691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5a90 00:20:48.885 [2024-07-25 07:35:21.527722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.527748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.536501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e2c28 00:20:48.885 [2024-07-25 07:35:21.537649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.537677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.544578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f92c0 00:20:48.885 [2024-07-25 07:35:21.545988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.885 [2024-07-25 07:35:21.546017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:20:48.885 [2024-07-25 07:35:21.552318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ed0b0 00:20:48.885 [2024-07-25 07:35:21.552950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.552976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.561601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eb760 00:20:48.886 [2024-07-25 07:35:21.562323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.562349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.572165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e1710 00:20:48.886 [2024-07-25 07:35:21.573257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.573284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.580410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fac10 00:20:48.886 [2024-07-25 07:35:21.581516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.581541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.589757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7970 00:20:48.886 [2024-07-25 07:35:21.591032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.591059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.597737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ebfd0 00:20:48.886 [2024-07-25 07:35:21.599284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.599309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.605570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ea248 00:20:48.886 [2024-07-25 07:35:21.606222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.606250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 
cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:48.886 [2024-07-25 07:35:21.614610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ddc00 00:20:48.886 [2024-07-25 07:35:21.615244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.886 [2024-07-25 07:35:21.615274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.624941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190df550 00:20:49.145 [2024-07-25 07:35:21.625921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.625951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.633254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e1b48 00:20:49.145 [2024-07-25 07:35:21.634224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.634251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.642094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fe720 00:20:49.145 [2024-07-25 07:35:21.643080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.643107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.650858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eb328 00:20:49.145 [2024-07-25 07:35:21.651489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.651519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.661305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f57b0 00:20:49.145 [2024-07-25 07:35:21.662671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.662699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.670881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f1430 00:20:49.145 [2024-07-25 07:35:21.672475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.672502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.678330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5220 00:20:49.145 [2024-07-25 07:35:21.679393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.679419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.687964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e3060 00:20:49.145 [2024-07-25 07:35:21.689001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.689027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.697109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de038 00:20:49.145 [2024-07-25 07:35:21.697735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.697766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.706235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f9b30 00:20:49.145 [2024-07-25 07:35:21.707157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.707186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.715312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ec408 00:20:49.145 [2024-07-25 07:35:21.716203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.716229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.723750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0ff8 00:20:49.145 [2024-07-25 07:35:21.724624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.145 [2024-07-25 07:35:21.724650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:49.145 [2024-07-25 07:35:21.732934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4140 00:20:49.146 [2024-07-25 07:35:21.733800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.733826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.741674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f3e60 00:20:49.146 [2024-07-25 07:35:21.742160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.742186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.750938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e6b70 00:20:49.146 [2024-07-25 07:35:21.751540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.751569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.759475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4de8 00:20:49.146 [2024-07-25 07:35:21.760331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.760357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.767589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f3e60 00:20:49.146 [2024-07-25 07:35:21.768417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.768443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.776574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de470 00:20:49.146 [2024-07-25 07:35:21.777530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.777555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.784881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0350 00:20:49.146 [2024-07-25 07:35:21.786337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.786363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.794504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ebfd0 00:20:49.146 [2024-07-25 07:35:21.795565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.795591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.801329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ec408 00:20:49.146 [2024-07-25 07:35:21.801883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.801904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.811517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e38d0 00:20:49.146 [2024-07-25 07:35:21.812544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.812571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.819005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0788 00:20:49.146 [2024-07-25 07:35:21.820466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.820492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.827718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190dece0 00:20:49.146 [2024-07-25 07:35:21.828638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.828664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.835987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fac10 00:20:49.146 [2024-07-25 07:35:21.836810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.836838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.845496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e8d30 00:20:49.146 [2024-07-25 07:35:21.846514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.846541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.852920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fda78 00:20:49.146 [2024-07-25 07:35:21.854322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 
07:35:21.854350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.862445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f7970 00:20:49.146 [2024-07-25 07:35:21.863446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.863476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:49.146 [2024-07-25 07:35:21.870195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5658 00:20:49.146 [2024-07-25 07:35:21.871062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.146 [2024-07-25 07:35:21.871088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:49.405 [2024-07-25 07:35:21.880313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f8a50 00:20:49.405 [2024-07-25 07:35:21.881636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.881670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.886547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2948 00:20:49.406 [2024-07-25 07:35:21.887211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.887239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.897576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ddc00 00:20:49.406 [2024-07-25 07:35:21.898949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.898976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.904503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f81e0 00:20:49.406 [2024-07-25 07:35:21.905369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.905395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.913129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4140 00:20:49.406 [2024-07-25 07:35:21.913700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:49.406 [2024-07-25 07:35:21.913731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.921578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5ec8 00:20:49.406 [2024-07-25 07:35:21.922348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.922375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.929618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f8a50 00:20:49.406 [2024-07-25 07:35:21.930378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.930405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.938875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f1ca0 00:20:49.406 [2024-07-25 07:35:21.939527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.939554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.947013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ebfd0 00:20:49.406 [2024-07-25 07:35:21.947571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.947598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.954605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f35f0 00:20:49.406 [2024-07-25 07:35:21.955249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.955274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.964941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ee190 00:20:49.406 [2024-07-25 07:35:21.966087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.966124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.973674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eaef0 00:20:49.406 [2024-07-25 07:35:21.974823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:0 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.979985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eff18 00:20:49.406 [2024-07-25 07:35:21.980589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.980615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.991126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2d80 00:20:49.406 [2024-07-25 07:35:21.992422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:21.992449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:21.999929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f8a50 00:20:49.406 [2024-07-25 07:35:22.001170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.001199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.006166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc998 00:20:49.406 [2024-07-25 07:35:22.006858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.006887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.016593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e73e0 00:20:49.406 [2024-07-25 07:35:22.017616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.017648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.023085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ee190 00:20:49.406 [2024-07-25 07:35:22.023688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.023716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.033835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5a90 00:20:49.406 [2024-07-25 07:35:22.034906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:2189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.034936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.040740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc998 00:20:49.406 [2024-07-25 07:35:22.041313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.041369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.048946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fb048 00:20:49.406 [2024-07-25 07:35:22.049525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.058373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e99d8 00:20:49.406 [2024-07-25 07:35:22.058844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.058864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.067414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f46d0 00:20:49.406 [2024-07-25 07:35:22.067988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.068016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.076453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f1430 00:20:49.406 [2024-07-25 07:35:22.077124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.077160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.086313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f9f68 00:20:49.406 [2024-07-25 07:35:22.087790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.087817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.092800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ef6a8 00:20:49.406 [2024-07-25 07:35:22.093508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:5602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.093534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.102196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eb328 00:20:49.406 [2024-07-25 07:35:22.102821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.406 [2024-07-25 07:35:22.102850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:49.406 [2024-07-25 07:35:22.110760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0bc0 00:20:49.406 [2024-07-25 07:35:22.111568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.407 [2024-07-25 07:35:22.111595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:49.407 [2024-07-25 07:35:22.119344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190de8a8 00:20:49.407 [2024-07-25 07:35:22.120139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.407 [2024-07-25 07:35:22.120186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:49.407 [2024-07-25 07:35:22.127787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e3498 00:20:49.407 [2024-07-25 07:35:22.128351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.407 [2024-07-25 07:35:22.128379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:49.407 [2024-07-25 07:35:22.135375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0350 00:20:49.407 [2024-07-25 07:35:22.136019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.407 [2024-07-25 07:35:22.136048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.146551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fcdd0 00:20:49.666 [2024-07-25 07:35:22.147897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.147926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.153389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f46d0 00:20:49.666 [2024-07-25 07:35:22.154276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.154303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.162717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f57b0 00:20:49.666 [2024-07-25 07:35:22.163793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.163825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.171821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e23b8 00:20:49.666 [2024-07-25 07:35:22.172532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.172573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.179895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e73e0 00:20:49.666 [2024-07-25 07:35:22.181281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.181306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.189554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e5a90 00:20:49.666 [2024-07-25 07:35:22.190591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.190617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.197740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0bc0 00:20:49.666 [2024-07-25 07:35:22.198797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.198823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.206887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2948 00:20:49.666 [2024-07-25 07:35:22.208049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.208075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.215908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fc128 00:20:49.666 [2024-07-25 
07:35:22.217150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.217176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.222013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f0ff8 00:20:49.666 [2024-07-25 07:35:22.222587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.222613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.231343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f2948 00:20:49.666 [2024-07-25 07:35:22.231756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.231779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.239743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4578 00:20:49.666 [2024-07-25 07:35:22.240179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.240201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.249606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f4b08 00:20:49.666 [2024-07-25 07:35:22.250613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.250640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.258001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ed920 00:20:49.666 [2024-07-25 07:35:22.258676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.258704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.265961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190ea680 00:20:49.666 [2024-07-25 07:35:22.267351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.267377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.273803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fb480 
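The wall of data_crc32_calc_done failures above is the intended outcome of this pass, not a malfunction: the accel error injector corrupts the CRC32C data digest, so the affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and because the bdev retry count is set to -1 the host treats each one as retryable and records it in the bdev's NVMe error statistics instead of failing the run. The get_transient_errcount check further down only needs that counter to be non-zero. A minimal sketch of that query, assuming the bperf RPC socket and bdev name used throughout this run and a jq binary on the PATH:

  # Read the transient-transport-error counter from bdevperf's iostat
  # (requires bdev_nvme_set_options --nvme-error-stat, which this test enables).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
             jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "injected digest errors were observed: $errcount"

For this pass the same query comes back as 227 a little further down in the trace.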
00:20:49.666 [2024-07-25 07:35:22.274350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.666 [2024-07-25 07:35:22.274369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:49.666 [2024-07-25 07:35:22.283878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f3e60 00:20:49.666 [2024-07-25 07:35:22.284929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.284955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.292054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fac10 00:20:49.667 [2024-07-25 07:35:22.293053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.293081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.300873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e4578 00:20:49.667 [2024-07-25 07:35:22.301967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.301993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.308514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190eee38 00:20:49.667 [2024-07-25 07:35:22.309942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.309970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.318667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190df550 00:20:49.667 [2024-07-25 07:35:22.319979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.320005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.325454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fa3a0 00:20:49.667 [2024-07-25 07:35:22.326319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.326346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.334036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with 
pdu=0x2000190e84c0 00:20:49.667 [2024-07-25 07:35:22.334573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.334602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.342839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190edd58 00:20:49.667 [2024-07-25 07:35:22.343490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.343517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.350652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f1ca0 00:20:49.667 [2024-07-25 07:35:22.352025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.352053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.360091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190dfdc0 00:20:49.667 [2024-07-25 07:35:22.361071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.361099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.368189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190e27f0 00:20:49.667 [2024-07-25 07:35:22.369167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.369192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.376770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f3e60 00:20:49.667 [2024-07-25 07:35:22.377438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.377465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.384690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f81e0 00:20:49.667 [2024-07-25 07:35:22.386052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.386080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:49.667 [2024-07-25 07:35:22.392175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x236f320) with pdu=0x2000190f8a50 00:20:49.667 [2024-07-25 07:35:22.392705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.667 [2024-07-25 07:35:22.392732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:49.926 [2024-07-25 07:35:22.403242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f1868 00:20:49.926 [2024-07-25 07:35:22.404384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.926 [2024-07-25 07:35:22.404413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.926 [2024-07-25 07:35:22.410737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190df988 00:20:49.926 [2024-07-25 07:35:22.412234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.926 [2024-07-25 07:35:22.412262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.926 [2024-07-25 07:35:22.418142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190fa3a0 00:20:49.926 [2024-07-25 07:35:22.418796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.926 [2024-07-25 07:35:22.418824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:49.926 [2024-07-25 07:35:22.427548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f320) with pdu=0x2000190f9b30 00:20:49.926 [2024-07-25 07:35:22.428049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.926 [2024-07-25 07:35:22.428070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:49.926 00:20:49.926 Latency(us) 00:20:49.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:49.926 nvme0n1 : 2.00 28984.69 113.22 0.00 0.00 4410.32 1788.65 12191.41 00:20:49.926 =================================================================================================================== 00:20:49.926 Total : 28984.69 113.22 0.00 0.00 4410.32 1788.65 12191.41 00:20:49.926 0 00:20:49.926 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:49.926 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:49.927 | .driver_specific 
00:20:49.927 | .nvme_error 00:20:49.927 | .status_code 00:20:49.927 | .command_transient_transport_error' 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 227 > 0 )) 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93918 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93918 ']' 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93918 00:20:49.927 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93918 00:20:50.186 killing process with pid 93918 00:20:50.186 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.186 00:20:50.186 Latency(us) 00:20:50.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.186 =================================================================================================================== 00:20:50.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93918' 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93918 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93918 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94008 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94008 /var/tmp/bperf.sock 00:20:50.186 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:50.187 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94008 ']' 00:20:50.187 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.187 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.187 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.187 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.187 07:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:50.446 Zero copy mechanism will not be used. 00:20:50.446 [2024-07-25 07:35:22.926853] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:50.446 [2024-07-25 07:35:22.926992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94008 ] 00:20:50.446 [2024-07-25 07:35:23.050297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.446 [2024-07-25 07:35:23.131514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.015 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.015 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:51.015 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.015 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.274 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.274 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.274 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.274 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.274 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.274 07:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.533 nvme0n1 00:20:51.533 07:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:51.533 07:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.533 07:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.533 07:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.533 07:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:51.533 
07:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:51.794 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:51.794 Zero copy mechanism will not be used. 00:20:51.794 Running I/O for 2 seconds... 00:20:51.794 [2024-07-25 07:35:24.321962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.322562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.322658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.327206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.327690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.327781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.332205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.332731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.332817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.337304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.337797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.337876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.342302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.342833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.342918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.347405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.347927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.347954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.352354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.352788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.352810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.357276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.357694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.357719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.362065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.362540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.362565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.366821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.367263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.367285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.371702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.372129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.372159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.376643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.377070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.377092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.381611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.382051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.794 [2024-07-25 07:35:24.382072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.794 [2024-07-25 07:35:24.386535] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.794 [2024-07-25 07:35:24.386971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.386992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.391454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.391894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.396331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.396777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.396797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.401190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.401608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.401629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.405968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.406401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.406426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.410868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.411330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.411351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.415682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.416131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.416154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
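The shell trace above set this second pass up: bdevperf was started idle on /var/tmp/bperf.sock for 128 KiB random writes at queue depth 16, NVMe error statistics and an unlimited bdev retry count were enabled, CRC32C error injection was cleared and then re-armed with -t corrupt -i 32, and the controller was attached with the TCP data digest (--ddgst) enabled before perform_tests started the two-second run whose digest errors on tqpair 0x236f660 are being logged here. Roughly the same sequence in script form (paths, sockets, and option spellings copied from the trace; the accel_error_inject_error calls appear to target the nvmf target's default RPC socket rather than bperf.sock):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  # bdevperf is started with -z so the job only runs once perform_tests is issued over the RPC socket;
  # the test also waits for the socket to appear before sending any RPCs.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable   # target side: start clean
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32   # target side: re-arm corruption
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

With the digest enabled on attach, every corrupted CRC32C shows up below as a data digest error followed by a transient transport completion, just as in the 4 KiB pass earlier in this log.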
00:20:51.795 [2024-07-25 07:35:24.420460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.420891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.420914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.425237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.425647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.425667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.430041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.430469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.430501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.434863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.435303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.435323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.439756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.440194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.440215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.444558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.444978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.445002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.449328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.449764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.449784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.454169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.454619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.454648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.458998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.459444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.459468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.463816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.464287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.464308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.468712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.469132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.469152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.473548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.473991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.474011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.478318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.478770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.478793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.483134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.483563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.483584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.487831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.488242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.488262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.492636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.493074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.493120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.497421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.497844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.497864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.502187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.502626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.502648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.506935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.507350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.507371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.511690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.512132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.795 [2024-07-25 07:35:24.512153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:51.795 [2024-07-25 07:35:24.516584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.795 [2024-07-25 07:35:24.517018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.796 [2024-07-25 07:35:24.517040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.796 [2024-07-25 07:35:24.521579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.796 [2024-07-25 07:35:24.522042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.796 [2024-07-25 07:35:24.522067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:51.796 [2024-07-25 07:35:24.526342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:51.796 [2024-07-25 07:35:24.526771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.796 [2024-07-25 07:35:24.526793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.531172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.531604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.531651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.535962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.536423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.536449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.540759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.541205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.541227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.545444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.545858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.545880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.550051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.550495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 
[2024-07-25 07:35:24.550521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.554765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.555184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.555205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.559463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.559909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.559938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.564253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.564685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.564706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.568821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.569250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.569270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.573465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.573884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.573905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.578110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.578551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.056 [2024-07-25 07:35:24.578573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.056 [2024-07-25 07:35:24.582821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.056 [2024-07-25 07:35:24.583247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.583268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.587597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.588070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.592282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.592726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.592751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.596942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.597372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.597397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.601668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.602088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.602109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.606413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.606859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.606880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.610978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.611418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.611444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.615781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.616227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.616247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.620560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.620999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.621022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.625332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.625773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.625793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.630050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.630516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.630536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.634892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.635322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.635342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.639695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.640139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.640159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.644351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.644812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.644836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.649089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.649555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.649584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.653788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.654233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.654254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.658460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.658899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.658933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.663220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.663640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.663661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.667829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.668247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.668268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.672464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.672906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.672942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.677168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.677586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.677633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.681781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 
[2024-07-25 07:35:24.682263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.682285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.686532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.686964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.686998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.691211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.057 [2024-07-25 07:35:24.691616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.057 [2024-07-25 07:35:24.691636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.057 [2024-07-25 07:35:24.695832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.696266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.696287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.700529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.700958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.700979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.705201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.705635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.705665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.709825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.710269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.710290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.714542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.714963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.714987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.719324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.719753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.719773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.723994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.724408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.724430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.728834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.729273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.729312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.734064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.734560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.734589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.739365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.739793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.739814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.744247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.744675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.744696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.749095] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.749550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.749578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.754009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.754440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.754464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.758813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.759257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.759277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.763559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.764026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.764055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.768437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.768870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.768891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.773205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.773639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.773668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.777971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.778420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.778444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:52.058 [2024-07-25 07:35:24.782756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.783172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.783192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.058 [2024-07-25 07:35:24.787431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.058 [2024-07-25 07:35:24.787871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.058 [2024-07-25 07:35:24.787904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.792264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.319 [2024-07-25 07:35:24.792695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.319 [2024-07-25 07:35:24.792716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.796991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.319 [2024-07-25 07:35:24.797421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.319 [2024-07-25 07:35:24.797443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.801835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.319 [2024-07-25 07:35:24.802249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.319 [2024-07-25 07:35:24.802269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.806659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.319 [2024-07-25 07:35:24.807096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.319 [2024-07-25 07:35:24.807145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.811561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.319 [2024-07-25 07:35:24.812009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.319 [2024-07-25 07:35:24.812030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.816388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.319 [2024-07-25 07:35:24.816819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.319 [2024-07-25 07:35:24.816839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.319 [2024-07-25 07:35:24.821275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.821703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.821725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.826083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.826547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.826571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.830984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.831430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.831455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.835901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.836346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.836371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.840809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.841237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.841258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.845608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.846050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.846071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.850434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.850881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.850903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.855172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.855591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.855621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.859925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.860353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.860391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.864702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.865143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.865175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.869398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.869834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.869856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.874121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.874624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.874654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.878935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.879399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.879424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.883800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.884251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.884279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.888580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.889045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.889073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.893378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.893821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.893850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.898160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.898578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.898607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.902993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.903489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.903528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.907839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.908294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.908320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.912521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.912973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 
[2024-07-25 07:35:24.913002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.917264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.917719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.917747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.921976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.922423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.922449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.926740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.927188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.927216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.320 [2024-07-25 07:35:24.931565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.320 [2024-07-25 07:35:24.931973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.320 [2024-07-25 07:35:24.932002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.936199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.936633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.936663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.940986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.941433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.941462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.945859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.946307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.946328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.950686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.951092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.951131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.955527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.955964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.955985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.960242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.960673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.960694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.964937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.965388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.965423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.969720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.970153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.970173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.974374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.974818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.974838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.979013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.979439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.979463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.983733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.984174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.984195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.988458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.988878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.988899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.993157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.993594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.993623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:24.997819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:24.998252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:24.998272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.002485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.002937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.002956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.007226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.007649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.007668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.011829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.012255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.012275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.016631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.017081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.017112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.021426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.021855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.021883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.026092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.026555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.026584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.030940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.031392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.031416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.035714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.036161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.036192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.040470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 [2024-07-25 07:35:25.040882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.321 [2024-07-25 07:35:25.040904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.321 [2024-07-25 07:35:25.045173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.321 
[2024-07-25 07:35:25.045603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.322 [2024-07-25 07:35:25.045624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.322 [2024-07-25 07:35:25.049909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.322 [2024-07-25 07:35:25.050368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.322 [2024-07-25 07:35:25.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.582 [2024-07-25 07:35:25.054732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.582 [2024-07-25 07:35:25.055226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.582 [2024-07-25 07:35:25.055251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.582 [2024-07-25 07:35:25.060609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.061045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.061069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.065390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.065814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.065852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.070179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.070636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.070670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.075026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.075492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.075525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.079943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) 
with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.080396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.080417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.084839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.085299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.085320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.089763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.090228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.090257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.094583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.095013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.095034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.099310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.099747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.099767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.104120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.104534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.104554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.108811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.109262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.109302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.113523] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.113962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.113982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.118266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.118704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.118725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.123041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.123488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.123517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.127844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.128295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.128319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.132538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.132950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.132970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.137283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.137705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.137751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.142021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.142473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.142504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.146801] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.147229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.147255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.151509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.151927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.151949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.156268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.156685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.156706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.160965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.161407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.161431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.165723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.166180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.166209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.170497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.170918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.170939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.175099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.175547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.175577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:52.583 [2024-07-25 07:35:25.179814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.180256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.180282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.184544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.583 [2024-07-25 07:35:25.184959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.583 [2024-07-25 07:35:25.184981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.583 [2024-07-25 07:35:25.189351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.189756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.189777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.194046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.194539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.194572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.198870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.199343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.199373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.203583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.204009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.204030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.208293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.208707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.208728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.212946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.213384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.213407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.217645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.218069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.218090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.222325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.222781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.222802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.226968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.227408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.227432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.231707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.232166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.232186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.236474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.236907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.236930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.241237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.241652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.241681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.245873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.246316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.246341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.250642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.251077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.251098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.255403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.255854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.255882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.260172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.260604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.260625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.264837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.265273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.265294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.269549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.269973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.269994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.274225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.274672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.274693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.278966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.279406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.279426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.283729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.284160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.284180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.288538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.288959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.288979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.293208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.293647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.293676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.298009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.298457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.298486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.302780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.303212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.303232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.307451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.307858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 
[2024-07-25 07:35:25.307895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.584 [2024-07-25 07:35:25.312192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.584 [2024-07-25 07:35:25.312677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.584 [2024-07-25 07:35:25.312699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.845 [2024-07-25 07:35:25.316940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.845 [2024-07-25 07:35:25.317391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.845 [2024-07-25 07:35:25.317417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.845 [2024-07-25 07:35:25.321716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.845 [2024-07-25 07:35:25.322164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.845 [2024-07-25 07:35:25.322186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.845 [2024-07-25 07:35:25.326454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.845 [2024-07-25 07:35:25.326909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.845 [2024-07-25 07:35:25.326926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.845 [2024-07-25 07:35:25.331169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.845 [2024-07-25 07:35:25.331579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.845 [2024-07-25 07:35:25.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.845 [2024-07-25 07:35:25.335783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.845 [2024-07-25 07:35:25.336240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.845 [2024-07-25 07:35:25.336260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.845 [2024-07-25 07:35:25.340512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.845 [2024-07-25 07:35:25.340949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.845 [2024-07-25 07:35:25.340970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.345305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.345756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.345777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.350057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.350521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.350545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.354793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.355239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.355260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.359560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.359996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.360017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.364294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.364731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.364751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.369048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.369531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.369560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.373883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.374333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.374357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.378656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.379090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.379110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.383389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.383800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.383820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.388084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.388551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.388580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.392789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.393230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.393251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.397458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.397878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.397909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.402108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.402562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.402584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.406763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.407187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.407208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.411461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.411910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.411930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.416174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.416605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.416627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.420816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.421275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.421295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.425474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.425909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.425929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.430172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.430615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.430635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.434736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.435159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.435180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.439339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 
[2024-07-25 07:35:25.439788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.439809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.444061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.444532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.444559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.448754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.449193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.449215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.453424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.453816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.846 [2024-07-25 07:35:25.453838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.846 [2024-07-25 07:35:25.458071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.846 [2024-07-25 07:35:25.458535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.458562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.462755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.463165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.463186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.467393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.467816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.467838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.471995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.472453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.472478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.476744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.477181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.477202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.481407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.481798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.481819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.486052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.486511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.486535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.490736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.491162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.491183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.495306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.495733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.495753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.499940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.500392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.500415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.504659] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.505102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.505134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.509395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.509811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.509833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.514177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.514631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.514653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.519032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.519477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.519502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.523705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.524092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.524123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.528505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.528948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.528969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.533249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.533677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.533702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:52.847 [2024-07-25 07:35:25.537998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.538454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.538487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.542875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.543308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.543328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.547657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.548073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.548094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.552477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.552872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.552894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.557284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.557697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.557720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.561983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.562443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.562468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.566919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.567360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.567381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.571732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.572182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.572201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.847 [2024-07-25 07:35:25.576542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:52.847 [2024-07-25 07:35:25.576970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.847 [2024-07-25 07:35:25.576993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.581415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.581831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.581849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.586286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.586716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.586740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.590975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.591407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.591429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.595646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.596065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.596085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.600421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.600851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.605109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.605558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.605578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.609870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.610332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.610356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.614649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.615091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.615123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.619384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.619813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.619833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.624125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.624543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.624563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.628834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.629262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.629282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.633521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.633959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.633979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.638217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.638660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.638689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.642888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.643323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.643348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.647590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.648020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.648041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.652342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.652768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.652789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.657030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.657479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.657504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.661778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.662242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.662263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.666541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.666968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 
[2024-07-25 07:35:25.666989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.671199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.671642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.671673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.675989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.676449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.676473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.680827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.681269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.681291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.685683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.686118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.686146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.109 [2024-07-25 07:35:25.690492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.109 [2024-07-25 07:35:25.690926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.109 [2024-07-25 07:35:25.690947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.695245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.695668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.695688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.699946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.700386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.700410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.704704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.705150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.709515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.709946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.709967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.714331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.714772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.714792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.719137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.719555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.719576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.723827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.724232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.724253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.728571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.729016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.729037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.733283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.733718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.733739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.737922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.738376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.738401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.742554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.742959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.742979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.747194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.747619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.747640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.751875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.752317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.752343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.756582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.757025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.757046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.761436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.761898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.761922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.766221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.766678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.766712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.771014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.771475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.771500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.775712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.776136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.776156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.780309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.780741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.780761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.784981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.785437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.785461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.789777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.790220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.790243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.794445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.794867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.794888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.799196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 
[2024-07-25 07:35:25.799615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.799636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.803850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.804255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.804276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.808477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.808917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.808937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.813156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.813574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.110 [2024-07-25 07:35:25.813595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.110 [2024-07-25 07:35:25.817859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.110 [2024-07-25 07:35:25.818324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.111 [2024-07-25 07:35:25.818348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.111 [2024-07-25 07:35:25.822540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.111 [2024-07-25 07:35:25.822986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.111 [2024-07-25 07:35:25.823007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.111 [2024-07-25 07:35:25.827237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.111 [2024-07-25 07:35:25.827665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.111 [2024-07-25 07:35:25.827685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.111 [2024-07-25 07:35:25.831916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.111 [2024-07-25 07:35:25.832359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.111 [2024-07-25 07:35:25.832378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.111 [2024-07-25 07:35:25.836600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.111 [2024-07-25 07:35:25.837042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.111 [2024-07-25 07:35:25.837065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.841362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.841808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.841831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.846186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.846665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.846696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.851101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.851569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.851599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.855994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.856452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.856472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.860926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.861356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.861377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.865745] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.866194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.866215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.870637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.871070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.871091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.875424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.875867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.875888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.880218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.880649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.372 [2024-07-25 07:35:25.880669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.372 [2024-07-25 07:35:25.885035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.372 [2024-07-25 07:35:25.885468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.885487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.889664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.890081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.890102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.894513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.894970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.894989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:53.373 [2024-07-25 07:35:25.899292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.899724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.899744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.904109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.904544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.904564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.908825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.909264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.909284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.913687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.914134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.914169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.918432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.918907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.918930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.923274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.923721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.923750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.928078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.928516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.928537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.932921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.933348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.933376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.937704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.938157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.938176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.942496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.942971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.942992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.947318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.947769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.947797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.952326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.952796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.952825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.957183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.957614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.957657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.961955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.962388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.962415] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.966726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.967206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.967236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.971752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.972187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.972215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.976542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.976980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.977008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.981187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.981617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.981644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.985842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.986297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.986323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.990624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.991057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.991083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:25.995329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:25.995760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:25.995787] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:26.000057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:26.000539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:26.000566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:26.004761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:26.005203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:26.005228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:26.009458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:26.009884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.373 [2024-07-25 07:35:26.009912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.373 [2024-07-25 07:35:26.014103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.373 [2024-07-25 07:35:26.014554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.014576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.018745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.019210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.019255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.023431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.023859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.023882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.028060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.374 [2024-07-25 07:35:26.028532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.032746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.033181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.033201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.037337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.037737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.037758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.041966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.042407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.042431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.046633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.047061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.047082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.051276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.051699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.051719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.055865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.056296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.056317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.060564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.060987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.061009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.065187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.065601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.065621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.069835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.070273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.070294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.074507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.074931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.074958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.079168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.079563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.079591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.083808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.084203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.084228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.088421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.088867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.088889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.093169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.093616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.093645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.097884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.098349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.098373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.374 [2024-07-25 07:35:26.102749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.374 [2024-07-25 07:35:26.103202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.374 [2024-07-25 07:35:26.103225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.107550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.107991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.108030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.112299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.112734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.112756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.116943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.117379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.117404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.121625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.122064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.122087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.126460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.126917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.126939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.131153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.131593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.131623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.135768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.136224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.136245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.140488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.140927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.140947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.145174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.145612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.145632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.149723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.150164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.150184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.154391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.154837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.154857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.159097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 
[2024-07-25 07:35:26.159513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.159541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.163707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.164167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.164187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.168328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.168751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.168771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.172949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.173408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.173438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.177693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.178127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.178148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.182342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.182774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.182794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.187016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.187463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.187493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.191812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) 
with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.192289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.192313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.196484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.196922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.196952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.201095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.201563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.201592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.635 [2024-07-25 07:35:26.205807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.635 [2024-07-25 07:35:26.206243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.635 [2024-07-25 07:35:26.206272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.210455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.210897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.210925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.215163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.215601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.215622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.219849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.220310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.220338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.224592] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.225027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.225048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.229305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.229749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.229770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.234049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.234523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.234567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.238796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.239220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.239240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.243466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.243922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.243952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.248085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.248517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.248538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.252688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.253102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.253134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.257419] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.257842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.257868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.262052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.262554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.262579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.266849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.267314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.267344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.271468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.271927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.271957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.276084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.276546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.276575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.280917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.281365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.281396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.636 [2024-07-25 07:35:26.285628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90 00:20:53.636 [2024-07-25 07:35:26.286073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.636 [2024-07-25 07:35:26.286104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:53.636 [2024-07-25 07:35:26.290320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90
00:20:53.636 [2024-07-25 07:35:26.290767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.636 [2024-07-25 07:35:26.290796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:53.636 [2024-07-25 07:35:26.294979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90
00:20:53.636 [2024-07-25 07:35:26.295433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.636 [2024-07-25 07:35:26.295460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:53.636 [2024-07-25 07:35:26.299661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90
00:20:53.636 [2024-07-25 07:35:26.300091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.636 [2024-07-25 07:35:26.300129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:53.636 [2024-07-25 07:35:26.304277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90
00:20:53.636 [2024-07-25 07:35:26.304715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.636 [2024-07-25 07:35:26.304743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:53.636 [2024-07-25 07:35:26.308822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x236f660) with pdu=0x2000190fef90
00:20:53.636 [2024-07-25 07:35:26.309140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.636 [2024-07-25 07:35:26.309163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:53.636
00:20:53.636 Latency(us)
00:20:53.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:53.636 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:53.636 nvme0n1 : 2.00 6492.60 811.57 0.00 0.00 2460.45 2017.59 8013.14
00:20:53.636 ===================================================================================================================
00:20:53.636 Total : 6492.60 811.57 0.00 0.00 2460.45 2017.59 8013.14
00:20:53.636 0
00:20:53.636 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:53.636 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:53.636 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:53.636 | .driver_specific
00:20:53.636 | .nvme_error
00:20:53.636 | .status_code
00:20:53.636 |
.command_transient_transport_error' 00:20:53.636 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 419 > 0 )) 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94008 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94008 ']' 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94008 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94008 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:53.897 killing process with pid 94008 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94008' 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94008 00:20:53.897 Received shutdown signal, test time was about 2.000000 seconds 00:20:53.897 00:20:53.897 Latency(us) 00:20:53.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.897 =================================================================================================================== 00:20:53.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.897 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94008 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93707 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93707 ']' 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93707 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93707 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:54.157 killing process with pid 93707 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93707' 00:20:54.157 07:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93707 00:20:54.157 07:35:26 
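For reference, the get_transient_errcount step traced above reduces to a single RPC plus a jq filter over its JSON output. A minimal sketch of the same query, reusing only the socket path, bdev name, and filter that appear in the trace (the errcount variable name is illustrative):

    # Count WRITE completions that came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The assertion in host/digest.sh passes only when at least one such error was counted
    (( errcount > 0 ))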
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93707 00:20:54.416 00:20:54.416 real 0m17.011s 00:20:54.416 user 0m30.086s 00:20:54.416 sys 0m5.135s 00:20:54.416 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.416 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.416 ************************************ 00:20:54.416 END TEST nvmf_digest_error 00:20:54.416 ************************************ 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:54.676 rmmod nvme_tcp 00:20:54.676 rmmod nvme_fabrics 00:20:54.676 rmmod nvme_keyring 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93707 ']' 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93707 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93707 ']' 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93707 00:20:54.676 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93707) - No such process 00:20:54.676 Process with pid 93707 is not found 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93707 is not found' 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:54.676 00:20:54.676 real 0m34.914s 00:20:54.676 user 1m1.118s 00:20:54.676 sys 0m10.302s 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.676 07:35:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.676 ************************************ 00:20:54.676 END TEST nvmf_digest 00:20:54.676 ************************************ 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.936 ************************************ 00:20:54.936 START TEST nvmf_mdns_discovery 00:20:54.936 ************************************ 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:54.936 * Looking for test storage... 00:20:54.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.936 07:35:27 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.936 07:35:27 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:54.936 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.937 07:35:27 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:54.937 Cannot find device "nvmf_tgt_br" 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.937 Cannot find device "nvmf_tgt_br2" 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:54.937 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:55.196 Cannot find device "nvmf_tgt_br" 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:55.197 Cannot find device "nvmf_tgt_br2" 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.197 07:35:27 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:55.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:55.197 00:20:55.197 --- 10.0.0.2 ping statistics --- 00:20:55.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.197 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:55.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:55.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:20:55.197 00:20:55.197 --- 10.0.0.3 ping statistics --- 00:20:55.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.197 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:20:55.197 00:20:55.197 --- 10.0.0.1 ping statistics --- 00:20:55.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.197 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.197 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94298 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94298 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94298 ']' 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
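Condensed for reference, the nvmf_veth_init sequence traced above builds one target network namespace, veth pairs, and a bridge, then sanity-checks the path with ping. A sketch using only the names and addresses shown in the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3), the cleanup steps, and error handling are omitted:

    ip netns add nvmf_tgt_ns_spdk                                # target namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target address

    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP toward the initiator
    ping -c 1 10.0.0.2                                                 # initiator -> target reachability check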
00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.457 07:35:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.457 [2024-07-25 07:35:28.001261] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:55.457 [2024-07-25 07:35:28.001313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.457 [2024-07-25 07:35:28.136682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.717 [2024-07-25 07:35:28.219341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.717 [2024-07-25 07:35:28.219384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.717 [2024-07-25 07:35:28.219389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.717 [2024-07-25 07:35:28.219394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.717 [2024-07-25 07:35:28.219398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.717 [2024-07-25 07:35:28.219419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 [2024-07-25 
07:35:28.986384] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 07:35:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 [2024-07-25 07:35:28.998420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 null0 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.554 null1 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.554 null2 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.554 null3 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94348 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:56.554 07:35:29 
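Before any mDNS records matter, the trace above brings up the nvmf target inside that namespace and exposes a discovery listener on port 8009. A compressed sketch of those steps as they appear in the rpc_cmd trace; the $rpc shorthand is illustrative, and rpc.py is talking to the target's /var/tmp/spdk.sock noted earlier:

    # target application, reactor on core 1, waiting for RPC-driven configuration
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --discovery-filter=address    # discovery filtering mode used by this test
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as used by the test
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512               # backing bdevs for the test subsystems
    $rpc bdev_null_create null1 1000 512               # (null2 and null3 are created the same way)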
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94348 /tmp/host.sock 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94348 ']' 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.554 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.554 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.554 [2024-07-25 07:35:29.119064] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:20:56.554 [2024-07-25 07:35:29.119135] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94348 ] 00:20:56.554 [2024-07-25 07:35:29.255530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.834 [2024-07-25 07:35:29.398297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.415 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.415 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:57.415 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:57.415 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:57.415 07:35:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:57.415 07:35:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94377 00:20:57.415 07:35:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:57.415 07:35:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:57.415 07:35:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:57.415 Process 982 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:57.415 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:57.415 Successfully dropped root privileges. 00:20:57.415 avahi-daemon 0.8 starting up. 00:20:57.415 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:57.415 Successfully called chroot(). 00:20:57.415 Successfully dropped remaining capabilities. 00:20:57.415 No service file found in /etc/avahi/services. 00:20:57.415 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:57.415 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
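Everything up to this point is fixture bring-up: the target-side nvmf_tgt gets its TCP transport, the well-known discovery listener on 10.0.0.2:8009 and four null bdevs, a second nvmf_tgt is started as the initiator ("host") side on its own RPC socket, and avahi-daemon is relaunched inside the nvmf_tgt_ns_spdk namespace restricted to the two test interfaces so mDNS traffic stays on 10.0.0.2/10.0.0.3. A condensed sketch of that sequence, pulled from the rpc_cmd/xtrace lines above (the for-loop, the backgrounding, and the process substitution that shows up as /dev/fd/63 are reconstructions, not verbatim script lines):

    # target side (default RPC socket)
    rpc_cmd nvmf_set_config --discovery-filter=address
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    for n in null0 null1 null2 null3; do rpc_cmd bdev_null_create "$n" 1000 512; done
    rpc_cmd bdev_wait_for_examine

    # host side: second nvmf_tgt on its own socket, avahi pinned to the test namespace
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock
    avahi-daemon --kill
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
        '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!
    sleep 1    # give avahi time to join the mDNS groups on both interfaces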
00:20:57.415 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:57.415 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:57.415 Network interface enumeration completed. 00:20:57.415 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:57.415 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:57.415 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:57.415 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.374 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 185756656. 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:58.374 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:58.634 07:35:31 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
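The empty-string comparisons here all follow the same pattern: query the host-side RPC socket, extract the .name fields with jq, and normalize with sort/xargs so the result can be string-compared against the expected set, which is still empty because bdev_nvme_start_mdns_discovery (-b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test) has only just started browsing and the target has not published anything yet. A sketch of the two helpers as they read from the piped commands in the trace (the function bodies are reconstructions; only the pipelines themselves are taken from the log):

    get_subsystem_names() {    # controllers the host has attached through discovery
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # namespaces those controllers expose as bdevs
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    [[ "$(get_subsystem_names)" == "" ]]    # nothing attached yet
    [[ "$(get_bdev_list)" == "" ]]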
00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:58.634 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 [2024-07-25 07:35:31.387175] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 [2024-07-25 07:35:31.434736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.893 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 07:35:31 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 [2024-07-25 07:35:31.494596] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 [2024-07-25 07:35:31.506549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.894 07:35:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:59.832 [2024-07-25 07:35:32.285452] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:00.401 [2024-07-25 07:35:32.884308] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:00.401 [2024-07-25 07:35:32.884357] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:00.401 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:00.401 cookie is 0 00:21:00.401 is_local: 1 00:21:00.401 our_own: 0 00:21:00.401 wide_area: 0 00:21:00.401 multicast: 1 00:21:00.401 cached: 1 00:21:00.401 [2024-07-25 
07:35:32.984103] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:00.401 [2024-07-25 07:35:32.984131] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:00.401 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:00.401 cookie is 0 00:21:00.401 is_local: 1 00:21:00.401 our_own: 0 00:21:00.401 wide_area: 0 00:21:00.401 multicast: 1 00:21:00.401 cached: 1 00:21:00.401 [2024-07-25 07:35:32.984142] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:00.401 [2024-07-25 07:35:33.083905] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:00.401 [2024-07-25 07:35:33.083924] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:00.401 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:00.401 cookie is 0 00:21:00.401 is_local: 1 00:21:00.401 our_own: 0 00:21:00.401 wide_area: 0 00:21:00.401 multicast: 1 00:21:00.401 cached: 1 00:21:00.661 [2024-07-25 07:35:33.183716] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:00.661 [2024-07-25 07:35:33.183740] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:00.661 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:00.661 cookie is 0 00:21:00.661 is_local: 1 00:21:00.661 our_own: 0 00:21:00.661 wide_area: 0 00:21:00.661 multicast: 1 00:21:00.661 cached: 1 00:21:00.661 [2024-07-25 07:35:33.183749] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:01.228 [2024-07-25 07:35:33.890613] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:01.228 [2024-07-25 07:35:33.890656] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:01.228 [2024-07-25 07:35:33.890669] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:01.487 [2024-07-25 07:35:33.976592] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:21:01.487 [2024-07-25 07:35:34.033479] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:01.487 [2024-07-25 07:35:34.033524] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:21:01.487 [2024-07-25 07:35:34.089953] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:01.487 [2024-07-25 07:35:34.089977] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:01.487 [2024-07-25 07:35:34.089990] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:01.487 [2024-07-25 07:35:34.175880] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:21:01.747 [2024-07-25 07:35:34.232034] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:01.747 [2024-07-25 07:35:34.232083] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
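With namespaces, the allowed host NQN and listeners in place on both subsystems, nvmf_publish_mdns_prr registers the discovery service with avahi; the 'spdk0'/'spdk1' _nvme-disc._tcp resolutions above are avahi seeing those registrations on both interfaces, and the Discovery[...] lines show the host's mDNS poller attaching mdns0_nvme0 (cnode20 at 10.0.0.3:4420) and mdns1_nvme0 (cnode0 at 10.0.0.2:4420). A sketch of the step, collapsed from the rpc_cmd calls in the trace (the cnode20 half on 10.0.0.3 mirrors the cnode0 lines and is omitted; the expected values in the comments come from the [[ ... == ... ]] checks nearby):

    # target side: populate cnode0 and announce it over mDNS (cnode20 is built the same way on 10.0.0.3)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_publish_mdns_prr

    # host side: the mDNS browser and both per-target discovery controllers should now exist
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs   # -> mdns
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs        # -> mdns0_nvme mdns1_nvme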
00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.284 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.285 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:21:04.285 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.285 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.285 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.285 07:35:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:21:05.222 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:21:05.222 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:05.223 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:05.223 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.223 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:05.223 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.223 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:21:05.482 07:35:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.482 [2024-07-25 07:35:38.016539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:05.482 [2024-07-25 07:35:38.016953] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:05.482 [2024-07-25 07:35:38.017009] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:05.482 [2024-07-25 07:35:38.017047] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:05.482 [2024-07-25 07:35:38.017067] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.482 07:35:38 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.482 [2024-07-25 07:35:38.024483] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:05.482 [2024-07-25 07:35:38.024935] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:05.482 [2024-07-25 07:35:38.025019] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.482 07:35:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:21:05.482 [2024-07-25 07:35:38.154758] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:21:05.482 [2024-07-25 07:35:38.155026] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:21:05.742 [2024-07-25 07:35:38.216003] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:05.742 [2024-07-25 07:35:38.216036] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:21:05.742 [2024-07-25 07:35:38.216042] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:05.742 [2024-07-25 07:35:38.216063] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:05.742 [2024-07-25 07:35:38.216161] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:05.742 [2024-07-25 07:35:38.216170] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:05.742 [2024-07-25 07:35:38.216175] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:05.742 [2024-07-25 07:35:38.216188] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:05.742 [2024-07-25 07:35:38.261656] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:21:05.742 [2024-07-25 07:35:38.261682] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:05.742 [2024-07-25 07:35:38.261723] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:05.742 [2024-07-25 07:35:38.261729] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:06.310 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:06.569 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.569 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:06.569 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.570 07:35:39 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.570 [2024-07-25 07:35:39.263563] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:06.570 [2024-07-25 07:35:39.263627] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:06.570 [2024-07-25 07:35:39.263662] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:06.570 [2024-07-25 07:35:39.263676] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.570 [2024-07-25 07:35:39.270987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.271160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.271174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.271183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.271193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.271201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.271211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.271218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.271226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.570 [2024-07-25 07:35:39.275593] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:06.570 [2024-07-25 07:35:39.275653] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.570 07:35:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:21:06.570 [2024-07-25 07:35:39.280902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.570 [2024-07-25 07:35:39.284336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.284367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.284379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.284387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.284396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.284404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.284413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.570 [2024-07-25 07:35:39.284420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.570 [2024-07-25 07:35:39.284428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.570 [2024-07-25 07:35:39.290914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.570 [2024-07-25 07:35:39.291052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.570 [2024-07-25 07:35:39.291071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.570 [2024-07-25 
07:35:39.291085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.570 [2024-07-25 07:35:39.291101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.570 [2024-07-25 07:35:39.291131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.570 [2024-07-25 07:35:39.291141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.570 [2024-07-25 07:35:39.291153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.570 [2024-07-25 07:35:39.291168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.570 [2024-07-25 07:35:39.294268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.570 [2024-07-25 07:35:39.300954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.570 [2024-07-25 07:35:39.301032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.570 [2024-07-25 07:35:39.301046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.570 [2024-07-25 07:35:39.301053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.570 [2024-07-25 07:35:39.301065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.570 [2024-07-25 07:35:39.301076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.570 [2024-07-25 07:35:39.301082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.570 [2024-07-25 07:35:39.301091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.570 [2024-07-25 07:35:39.301103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
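The connect() errno = 111 (ECONNREFUSED) retries against tqpair 0x1ded6b0 (10.0.0.2:4420) and 0x1ded380 (10.0.0.3:4420) are the expected fallout of the two nvmf_subsystem_remove_listener calls above: the admin queues on 4420 were torn down (ABORTED - SQ DELETION), and bdev_nvme keeps resetting the controller against the stale trid until the next discovery log page drops that path, leaving only the 4421 listeners added earlier. A sketch of the step and the check it leads up to (the remove calls, the sleep and the path helper are taken from the trace; the final "4421"-only comparison is the expected follow-up and is an assumption, as it does not appear in this excerpt):

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
    sleep 1    # let the discovery pollers fetch fresh log pages and drop the dead paths

    get_subsystem_paths() {    # trsvcids of the live paths for one controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    [[ "$(get_subsystem_paths mdns0_nvme0)" == "4421" ]]    # assumed expectation, not shown above
    [[ "$(get_subsystem_paths mdns1_nvme0)" == "4421" ]]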
00:21:06.832 [2024-07-25 07:35:39.304269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.832 [2024-07-25 07:35:39.304347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.304362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.832 [2024-07-25 07:35:39.304371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.304386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.304399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.304407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.304416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.832 [2024-07-25 07:35:39.304430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.832 [2024-07-25 07:35:39.310982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.832 [2024-07-25 07:35:39.311057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.311073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.832 [2024-07-25 07:35:39.311083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.311097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.311110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.311132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.311141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.832 [2024-07-25 07:35:39.311154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.832 [2024-07-25 07:35:39.314299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.832 [2024-07-25 07:35:39.314369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.314384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.832 [2024-07-25 07:35:39.314393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.314407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.314420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.314427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.314436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.832 [2024-07-25 07:35:39.314449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.832 [2024-07-25 07:35:39.321017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.832 [2024-07-25 07:35:39.321103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.321132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.832 [2024-07-25 07:35:39.321143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.321157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.321171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.321179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.321188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.832 [2024-07-25 07:35:39.321201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.832 [2024-07-25 07:35:39.324325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.832 [2024-07-25 07:35:39.324415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.324431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.832 [2024-07-25 07:35:39.324440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.324454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.324467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.324475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.324483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.832 [2024-07-25 07:35:39.324496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.832 [2024-07-25 07:35:39.331055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.832 [2024-07-25 07:35:39.331140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.331157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.832 [2024-07-25 07:35:39.331167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.331181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.331194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.331201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.331209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.832 [2024-07-25 07:35:39.331222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.832 [2024-07-25 07:35:39.334356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.832 [2024-07-25 07:35:39.334426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.334441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.832 [2024-07-25 07:35:39.334450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.832 [2024-07-25 07:35:39.334463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.832 [2024-07-25 07:35:39.334476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.832 [2024-07-25 07:35:39.334496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.832 [2024-07-25 07:35:39.334505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.832 [2024-07-25 07:35:39.334518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.832 [2024-07-25 07:35:39.341088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.832 [2024-07-25 07:35:39.341180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.832 [2024-07-25 07:35:39.341195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.833 [2024-07-25 07:35:39.341205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.341218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.341231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.341239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.341247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.833 [2024-07-25 07:35:39.341260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.833 [2024-07-25 07:35:39.344383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.833 [2024-07-25 07:35:39.344453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.344467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.833 [2024-07-25 07:35:39.344476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.344489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.344502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.344510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.344518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.833 [2024-07-25 07:35:39.344531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.833 [2024-07-25 07:35:39.351124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.833 [2024-07-25 07:35:39.351205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.351220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.833 [2024-07-25 07:35:39.351228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.351242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.351254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.351261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.351268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.833 [2024-07-25 07:35:39.351280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.833 [2024-07-25 07:35:39.354406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.833 [2024-07-25 07:35:39.354459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.354470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.833 [2024-07-25 07:35:39.354477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.354498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.354525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.354532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.354539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.833 [2024-07-25 07:35:39.354562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.833 [2024-07-25 07:35:39.361161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.833 [2024-07-25 07:35:39.361234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.361246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.833 [2024-07-25 07:35:39.361253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.361263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.361273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.361279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.361286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.833 [2024-07-25 07:35:39.361296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.833 [2024-07-25 07:35:39.364422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.833 [2024-07-25 07:35:39.364496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.364508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.833 [2024-07-25 07:35:39.364516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.364527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.364591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.364600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.364606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.833 [2024-07-25 07:35:39.364616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.833 [2024-07-25 07:35:39.371185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.833 [2024-07-25 07:35:39.371258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.371270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.833 [2024-07-25 07:35:39.371279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.371290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.371301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.371307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.371313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.833 [2024-07-25 07:35:39.371322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.833 [2024-07-25 07:35:39.374444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.833 [2024-07-25 07:35:39.374518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.374530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.833 [2024-07-25 07:35:39.374537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.374548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.374569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.374577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.374583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.833 [2024-07-25 07:35:39.374592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.833 [2024-07-25 07:35:39.381206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.833 [2024-07-25 07:35:39.381258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.381268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.833 [2024-07-25 07:35:39.381275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.381285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.381295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.381300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.381307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.833 [2024-07-25 07:35:39.381316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.833 [2024-07-25 07:35:39.384458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.833 [2024-07-25 07:35:39.384570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.384583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.833 [2024-07-25 07:35:39.384591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.833 [2024-07-25 07:35:39.384602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.833 [2024-07-25 07:35:39.384627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.833 [2024-07-25 07:35:39.384633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.833 [2024-07-25 07:35:39.384639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.833 [2024-07-25 07:35:39.384650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.833 [2024-07-25 07:35:39.391223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.833 [2024-07-25 07:35:39.391274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.833 [2024-07-25 07:35:39.391285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.833 [2024-07-25 07:35:39.391292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.834 [2024-07-25 07:35:39.391303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.834 [2024-07-25 07:35:39.391312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.834 [2024-07-25 07:35:39.391318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.834 [2024-07-25 07:35:39.391324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.834 [2024-07-25 07:35:39.391334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.834 [2024-07-25 07:35:39.394531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.834 [2024-07-25 07:35:39.394582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.834 [2024-07-25 07:35:39.394593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.834 [2024-07-25 07:35:39.394600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.834 [2024-07-25 07:35:39.394610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.834 [2024-07-25 07:35:39.394631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.834 [2024-07-25 07:35:39.394638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.834 [2024-07-25 07:35:39.394644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.834 [2024-07-25 07:35:39.394654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.834 [2024-07-25 07:35:39.401238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:06.834 [2024-07-25 07:35:39.401295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.834 [2024-07-25 07:35:39.401306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded6b0 with addr=10.0.0.2, port=4420 00:21:06.834 [2024-07-25 07:35:39.401313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded6b0 is same with the state(5) to be set 00:21:06.834 [2024-07-25 07:35:39.401325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded6b0 (9): Bad file descriptor 00:21:06.834 [2024-07-25 07:35:39.401335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:06.834 [2024-07-25 07:35:39.401341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:06.834 [2024-07-25 07:35:39.401347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:06.834 [2024-07-25 07:35:39.401356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.834 [2024-07-25 07:35:39.404545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:06.834 [2024-07-25 07:35:39.404596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.834 [2024-07-25 07:35:39.404606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ded380 with addr=10.0.0.3, port=4420 00:21:06.834 [2024-07-25 07:35:39.404612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ded380 is same with the state(5) to be set 00:21:06.834 [2024-07-25 07:35:39.404624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ded380 (9): Bad file descriptor 00:21:06.834 [2024-07-25 07:35:39.404644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:06.834 [2024-07-25 07:35:39.404650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:06.834 [2024-07-25 07:35:39.404656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:06.834 [2024-07-25 07:35:39.404665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.834 [2024-07-25 07:35:39.405649] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:21:06.834 [2024-07-25 07:35:39.405673] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:06.834 [2024-07-25 07:35:39.405693] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:06.834 [2024-07-25 07:35:39.406665] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:06.834 [2024-07-25 07:35:39.406687] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:06.834 [2024-07-25 07:35:39.406700] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:06.834 [2024-07-25 07:35:39.491558] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:06.834 [2024-07-25 07:35:39.492547] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:07.817 07:35:40 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 
4421 == \4\4\2\1 ]] 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.817 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.075 [2024-07-25 07:35:40.571328] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.075 07:35:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.014 07:35:41 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:21:09.014 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.274 [2024-07-25 07:35:41.798623] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:09.274 2024/07/25 07:35:41 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:09.274 request: 00:21:09.274 { 00:21:09.274 "method": "bdev_nvme_start_mdns_discovery", 00:21:09.274 "params": { 00:21:09.274 "name": "mdns", 00:21:09.274 "svcname": "_nvme-disc._http", 00:21:09.274 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:09.274 } 00:21:09.274 } 00:21:09.274 Got JSON-RPC error response 00:21:09.274 GoRPCClient: error on JSON-RPC call 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:09.274 07:35:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:21:09.841 [2024-07-25 07:35:42.382217] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:09.841 [2024-07-25 07:35:42.482024] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:10.100 [2024-07-25 07:35:42.581839] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:10.100 [2024-07-25 07:35:42.581898] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:10.100 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.100 cookie is 0 00:21:10.100 is_local: 1 00:21:10.100 our_own: 0 00:21:10.100 wide_area: 0 00:21:10.100 multicast: 1 00:21:10.100 cached: 1 00:21:10.100 [2024-07-25 07:35:42.681653] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:10.100 [2024-07-25 07:35:42.681771] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:10.100 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.100 cookie is 0 00:21:10.100 is_local: 1 00:21:10.100 our_own: 0 00:21:10.100 wide_area: 0 00:21:10.100 multicast: 1 00:21:10.100 cached: 1 00:21:10.100 [2024-07-25 07:35:42.681826] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:10.100 [2024-07-25 07:35:42.781460] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:10.100 [2024-07-25 07:35:42.781554] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:10.101 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.101 cookie is 0 00:21:10.101 is_local: 1 00:21:10.101 our_own: 0 00:21:10.101 wide_area: 0 00:21:10.101 multicast: 1 00:21:10.101 cached: 1 00:21:10.360 [2024-07-25 07:35:42.881264] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:10.360 [2024-07-25 07:35:42.881356] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:10.360 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.360 cookie is 0 00:21:10.360 is_local: 1 00:21:10.360 our_own: 0 00:21:10.360 wide_area: 0 00:21:10.360 multicast: 1 00:21:10.360 cached: 1 00:21:10.360 [2024-07-25 07:35:42.881406] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:10.929 [2024-07-25 07:35:43.593202] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:10.929 [2024-07-25 07:35:43.593361] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:10.929 [2024-07-25 07:35:43.593386] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:11.188 [2024-07-25 07:35:43.681168] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:21:11.188 [2024-07-25 07:35:43.748472] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:11.188 [2024-07-25 07:35:43.748512] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:11.188 [2024-07-25 07:35:43.792666] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:11.188 [2024-07-25 07:35:43.792693] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:11.188 [2024-07-25 07:35:43.792706] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:11.188 [2024-07-25 07:35:43.878582] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:21:11.458 [2024-07-25 07:35:43.938376] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:11.458 [2024-07-25 07:35:43.938402] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 
00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:14.750 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.751 07:35:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.751 [2024-07-25 07:35:46.994894] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:14.751 2024/07/25 07:35:46 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:14.751 
request: 00:21:14.751 { 00:21:14.751 "method": "bdev_nvme_start_mdns_discovery", 00:21:14.751 "params": { 00:21:14.751 "name": "cdc", 00:21:14.751 "svcname": "_nvme-disc._tcp", 00:21:14.751 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:14.751 } 00:21:14.751 } 00:21:14.751 Got JSON-RPC error response 00:21:14.751 GoRPCClient: error on JSON-RPC call 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.751 07:35:47 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94348 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94348 00:21:14.751 [2024-07-25 07:35:47.289121] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94377 00:21:14.751 Got SIGTERM, quitting. 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.751 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:21:14.751 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:21:14.751 avahi-daemon 0.8 exiting. 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.751 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.751 rmmod nvme_tcp 00:21:14.751 rmmod nvme_fabrics 00:21:15.010 rmmod nvme_keyring 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94298 ']' 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94298 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94298 ']' 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94298 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94298 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94298' 00:21:15.010 killing process with pid 94298 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94298 00:21:15.010 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94298 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:15.270 ************************************ 00:21:15.270 END TEST nvmf_mdns_discovery 00:21:15.270 00:21:15.270 real 0m20.388s 00:21:15.270 user 0m39.662s 00:21:15.270 sys 0m2.030s 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.270 ************************************ 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.270 ************************************ 00:21:15.270 START TEST nvmf_host_multipath 00:21:15.270 ************************************ 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:15.270 * Looking for test storage... 
00:21:15.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:15.270 07:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
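Before any of those knobs are used, nvmftestinit builds a veth/namespace topology so the target can listen on 10.0.0.2 from inside its own network namespace while the host acts as the initiator on 10.0.0.1. Condensed from the ip/iptables commands traced below (the cleanup steps, the "link set ... up" calls and the FORWARD rule are omitted here; this is a sketch, not the full nvmf_veth_init logic in common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2 (ports 4420/4421)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # bridge joins the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings that follow in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm the topology before nvmf_tgt is started inside nvmf_tgt_ns_spdk with -m 0x3.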
00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.530 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:15.531 07:35:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:15.531 Cannot find device "nvmf_tgt_br" 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:15.531 Cannot find device "nvmf_tgt_br2" 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:15.531 Cannot find device "nvmf_tgt_br" 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:15.531 Cannot find device "nvmf_tgt_br2" 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:15.531 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:15.791 07:35:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:15.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:21:15.791 00:21:15.791 --- 10.0.0.2 ping statistics --- 00:21:15.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.791 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:15.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:15.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:21:15.791 00:21:15.791 --- 10.0.0.3 ping statistics --- 00:21:15.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.791 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:15.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:15.791 00:21:15.791 --- 10.0.0.1 ping statistics --- 00:21:15.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.791 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94939 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94939 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94939 ']' 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.791 07:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:15.791 [2024-07-25 07:35:48.452871] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:21:15.791 [2024-07-25 07:35:48.452946] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.051 [2024-07-25 07:35:48.594995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:16.051 [2024-07-25 07:35:48.706496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.051 [2024-07-25 07:35:48.706562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.051 [2024-07-25 07:35:48.706570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.051 [2024-07-25 07:35:48.706575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.051 [2024-07-25 07:35:48.706580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.051 [2024-07-25 07:35:48.706820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.051 [2024-07-25 07:35:48.706825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.619 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.619 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:16.619 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.619 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.619 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:16.620 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.620 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94939 00:21:16.620 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:16.879 [2024-07-25 07:35:49.492713] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.879 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:17.138 Malloc0 00:21:17.138 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:17.397 07:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.397 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.656 [2024-07-25 07:35:50.286870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.656 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4421 00:21:17.915 [2024-07-25 07:35:50.490638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:17.915 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95037 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95037 /var/tmp/bdevperf.sock 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95037 ']' 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.916 07:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:18.853 07:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.853 07:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:18.853 07:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:19.112 07:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:19.371 Nvme0n1 00:21:19.371 07:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:19.630 Nvme0n1 00:21:19.630 07:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:19.630 07:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:20.566 07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:20.566 07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:20.825 07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:21.084 
07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:21.084 07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95124 00:21:21.084 07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:21.084 07:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:27.667 Attaching 4 probes... 00:21:27.667 @path[10.0.0.2, 4421]: 19568 00:21:27.667 @path[10.0.0.2, 4421]: 19581 00:21:27.667 @path[10.0.0.2, 4421]: 19483 00:21:27.667 @path[10.0.0.2, 4421]: 19506 00:21:27.667 @path[10.0.0.2, 4421]: 19708 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95124 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:27.667 07:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:27.667 07:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:27.667 07:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:27.667 07:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:27.667 07:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95260 00:21:27.667 07:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select 
(.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.228 Attaching 4 probes... 00:21:34.228 @path[10.0.0.2, 4420]: 20885 00:21:34.228 @path[10.0.0.2, 4420]: 20960 00:21:34.228 @path[10.0.0.2, 4420]: 21186 00:21:34.228 @path[10.0.0.2, 4420]: 21436 00:21:34.228 @path[10.0.0.2, 4420]: 21303 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95260 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:34.228 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:34.229 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:34.229 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:34.229 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:34.229 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95385 00:21:34.229 07:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:40.795 07:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:40.795 07:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.795 Attaching 4 probes... 
00:21:40.795 @path[10.0.0.2, 4421]: 13915 00:21:40.795 @path[10.0.0.2, 4421]: 19837 00:21:40.795 @path[10.0.0.2, 4421]: 19880 00:21:40.795 @path[10.0.0.2, 4421]: 19846 00:21:40.795 @path[10.0.0.2, 4421]: 19913 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95385 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:40.795 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:40.796 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95515 00:21:40.796 07:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.364 Attaching 4 probes... 
00:21:47.364 00:21:47.364 00:21:47.364 00:21:47.364 00:21:47.364 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95515 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:47.364 07:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:47.364 07:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:47.364 07:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95651 00:21:47.364 07:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:47.364 07:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.956 Attaching 4 probes... 
00:21:53.956 @path[10.0.0.2, 4421]: 19324 00:21:53.956 @path[10.0.0.2, 4421]: 19178 00:21:53.956 @path[10.0.0.2, 4421]: 19082 00:21:53.956 @path[10.0.0.2, 4421]: 19062 00:21:53.956 @path[10.0.0.2, 4421]: 19672 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95651 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:53.956 [2024-07-25 07:36:26.423097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 [2024-07-25 07:36:26.423179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff330 is same with the state(5) to be set 00:21:53.956 07:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:54.896 07:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:54.896 07:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95781 00:21:54.896 07:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:54.896 07:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- 
# jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.467 Attaching 4 probes... 00:22:01.467 @path[10.0.0.2, 4420]: 20126 00:22:01.467 @path[10.0.0.2, 4420]: 20833 00:22:01.467 @path[10.0.0.2, 4420]: 22348 00:22:01.467 @path[10.0.0.2, 4420]: 22352 00:22:01.467 @path[10.0.0.2, 4420]: 22332 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95781 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.467 [2024-07-25 07:36:33.838625] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:01.467 07:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:01.467 07:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:08.045 07:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:08.045 07:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95973 00:22:08.045 07:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:08.045 07:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:14.623 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:14.623 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.624 Attaching 4 probes... 
00:22:14.624 @path[10.0.0.2, 4421]: 18572 00:22:14.624 @path[10.0.0.2, 4421]: 18557 00:22:14.624 @path[10.0.0.2, 4421]: 18308 00:22:14.624 @path[10.0.0.2, 4421]: 18227 00:22:14.624 @path[10.0.0.2, 4421]: 18204 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95973 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95037 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95037 ']' 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95037 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95037 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95037' 00:22:14.624 killing process with pid 95037 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95037 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95037 00:22:14.624 Connection closed with partial response: 00:22:14.624 00:22:14.624 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95037 00:22:14.624 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:14.624 [2024-07-25 07:35:50.548233] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:14.624 [2024-07-25 07:35:50.548317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95037 ] 00:22:14.624 [2024-07-25 07:35:50.685535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.624 [2024-07-25 07:35:50.786898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.624 Running I/O for 90 seconds... 
00:22:14.624 [2024-07-25 07:36:00.199752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.624 [2024-07-25 07:36:00.199817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.199872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.199896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.199917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.199938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.199959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.199980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.199993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.200001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.200016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.200024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:14.624 [2024-07-25 07:36:00.200037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.624 [2024-07-25 07:36:00.200045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:14.624 [2024-07-25 07:36:00.200059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:14.624 [2024-07-25 07:36:00.200083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:14.624-00:22:14.627 [2024-07-25 07:36:00.200096 - 07:36:00.206534] [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: WRITE (lba 105960-106816, len:8) and READ (lba 105808-105872, len:8) commands on qid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0002-0076 ...]
00:22:14.627 [2024-07-25 07:36:06.616801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.627 [2024-07-25 07:36:06.616860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:22:14.627-00:22:14.630 [2024-07-25 07:36:06.616904 - 07:36:06.619824] [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: WRITE commands (lba 124976-125664, len:8) on qid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0017-006d ...]
00:22:14.630 [2024-07-25 07:36:06.619841] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:06.619861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:06.620875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:06.620892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.406942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:22:14.630 [2024-07-25 07:36:13.407296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.407474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.407483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.630 [2024-07-25 07:36:13.408884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:13.408907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:13.408931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:13.408961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.408976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:13.408985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.409000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:13.409008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.409023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-07-25 07:36:13.409032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:14.630 [2024-07-25 07:36:13.409047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:14.631 [2024-07-25 07:36:13.409315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.631 [2024-07-25 07:36:13.409340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.409981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-07-25 07:36:13.409989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.631 [2024-07-25 07:36:13.410005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.632 [2024-07-25 07:36:13.410469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.410974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.410982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.411001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:14.632 [2024-07-25 07:36:13.411010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.411029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.411038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.411061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.411070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.411089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.411099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.411128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-07-25 07:36:13.411137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.632 [2024-07-25 07:36:13.411155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:22:14.633 [2024-07-25 07:36:13.411851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:13.411879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:13.411887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:26.424155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.633 [2024-07-25 07:36:26.424211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:26.424232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.633 [2024-07-25 07:36:26.424242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:26.424253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.633 [2024-07-25 07:36:26.424262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:26.424272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.633 [2024-07-25 07:36:26.424281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.633 [2024-07-25 07:36:26.424291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.634 [2024-07-25 07:36:26.424299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.634 [2024-07-25 07:36:26.424309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.634 [2024-07-25 07:36:26.424317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.634 [2024-07-25 07:36:26.424327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.634 [2024-07-25 07:36:26.424335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.634 [2024-07-25 07:36:26.424369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.634 [2024-07-25 07:36:26.424379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.634 [2024-07-25 
07:36:26.424388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:14.634 [2024-07-25 07:36:26.424396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion notice pair repeats for every outstanding command on qid:1 while the submission queue is deleted: WRITE lba:424 through lba:1120 (SGL DATA BLOCK) and READ lba:224 through lba:296 (SGL TRANSPORT DATA BLOCK), each reported ABORTED - SQ DELETION (00/08); the remaining queued requests are then drained via nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request for WRITE lba:1128 through lba:1232 (PRP1 0x0 PRP2 0x0), with the same ABORTED - SQ DELETION completion ...]
00:22:14.637 [2024-07-25 07:36:26.426593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:14.637 [2024-07-25 07:36:26.426599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.426605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:304 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.426613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.426621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.637 [2024-07-25 07:36:26.426626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.426631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:312 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.426639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.426646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.637 [2024-07-25 07:36:26.426652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.426658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.426669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.426677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.637 [2024-07-25 07:36:26.426682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:328 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.426698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.426706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.637 [2024-07-25 07:36:26.426711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.426718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:336 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.426725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.426733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.637 [2024-07-25 07:36:26.446564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.446600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:344 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.446614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.446631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.637 [2024-07-25 07:36:26.446640] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.637 [2024-07-25 07:36:26.446649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:8 PRP1 0x0 PRP2 0x0 00:22:14.637 [2024-07-25 07:36:26.446660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.446734] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a8250 was disconnected and freed. reset controller. 00:22:14.637 [2024-07-25 07:36:26.446846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.637 [2024-07-25 07:36:26.446865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.446879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.637 [2024-07-25 07:36:26.446890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.446903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.637 [2024-07-25 07:36:26.446914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.446926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.637 [2024-07-25 07:36:26.446937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.637 [2024-07-25 07:36:26.446949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21249c0 is same with the state(5) to be set 00:22:14.637 [2024-07-25 07:36:26.448644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.637 [2024-07-25 07:36:26.448686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21249c0 (9): Bad file descriptor 00:22:14.637 [2024-07-25 07:36:26.448816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.637 [2024-07-25 07:36:26.448837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21249c0 with addr=10.0.0.2, port=4421 00:22:14.637 [2024-07-25 07:36:26.448851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21249c0 is same with the state(5) to be set 00:22:14.637 [2024-07-25 07:36:26.448871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21249c0 (9): Bad file descriptor 00:22:14.637 [2024-07-25 07:36:26.448889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.637 [2024-07-25 07:36:26.448901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:14.637 [2024-07-25 07:36:26.448915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:14.637 [2024-07-25 07:36:26.448939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:14.638 [2024-07-25 07:36:26.448950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.638 [2024-07-25 07:36:36.465219] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:14.638 Received shutdown signal, test time was about 54.124499 seconds 00:22:14.638 00:22:14.638 Latency(us) 00:22:14.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.638 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:14.638 Verification LBA range: start 0x0 length 0x4000 00:22:14.638 Nvme0n1 : 54.12 8542.40 33.37 0.00 0.00 14965.33 683.26 7033243.39 00:22:14.638 =================================================================================================================== 00:22:14.638 Total : 8542.40 33.37 0.00 0.00 14965.33 683.26 7033243.39 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.638 rmmod nvme_tcp 00:22:14.638 rmmod nvme_fabrics 00:22:14.638 rmmod nvme_keyring 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.638 07:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94939 ']' 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94939 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94939 ']' 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94939 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94939 00:22:14.638 killing process with pid 94939 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94939' 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94939 00:22:14.638 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94939 00:22:14.896 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:14.897 00:22:14.897 real 0m59.576s 00:22:14.897 user 2m50.490s 00:22:14.897 sys 0m10.845s 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:14.897 ************************************ 00:22:14.897 END TEST nvmf_host_multipath 00:22:14.897 ************************************ 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.897 ************************************ 00:22:14.897 START TEST nvmf_timeout 00:22:14.897 ************************************ 00:22:14.897 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:14.897 * Looking for test storage... 
00:22:15.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:15.156 Cannot find device "nvmf_tgt_br" 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:15.156 Cannot find device "nvmf_tgt_br2" 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
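Two things are visible in the nvmf_veth_init preamble above: the topology variables (initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3, bridge nvmf_br, namespace nvmf_tgt_ns_spdk) and a tolerant teardown of leftovers from any previous run, where each failing ip command ("Cannot find device ...") is immediately followed by true so the script keeps going. The entries that follow rebuild the topology; condensed into a rough sketch using the names and addresses from this trace (error handling simplified, second target interface omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target sanity check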
00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:15.156 Cannot find device "nvmf_tgt_br" 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:15.156 Cannot find device "nvmf_tgt_br2" 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:15.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:15.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:15.156 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:15.414 07:36:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:15.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:22:15.414 00:22:15.414 --- 10.0.0.2 ping statistics --- 00:22:15.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.414 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:15.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:15.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:22:15.414 00:22:15.414 --- 10.0.0.3 ping statistics --- 00:22:15.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.414 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:15.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:22:15.414 00:22:15.414 --- 10.0.0.1 ping statistics --- 00:22:15.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.414 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.414 07:36:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:22:15.414 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.414 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.414 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.414 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96298 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96298 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96298 ']' 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.415 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:15.415 [2024-07-25 07:36:48.097766] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:15.415 [2024-07-25 07:36:48.097833] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.671 [2024-07-25 07:36:48.238378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:15.671 [2024-07-25 07:36:48.352453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
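With connectivity verified in both directions, nvmf/common.sh prefixes NVMF_APP with the namespace wrapper, so the target runs inside nvmf_tgt_ns_spdk while bdevperf later runs in the root namespace and reaches it only through the veth/bridge path built above. A simplified sketch of how the target is brought up here (the real harness uses its waitforlisten helper; polling for the RPC socket merely stands in for it):

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # crude stand-in for waitforlisten: wait until the app's RPC socket exists
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done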
00:22:15.671 [2024-07-25 07:36:48.352529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.671 [2024-07-25 07:36:48.352537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.671 [2024-07-25 07:36:48.352543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.671 [2024-07-25 07:36:48.352547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.671 [2024-07-25 07:36:48.352849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.671 [2024-07-25 07:36:48.352854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.238 07:36:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:16.496 [2024-07-25 07:36:49.133893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.496 07:36:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:16.755 Malloc0 00:22:16.755 07:36:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.014 07:36:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.272 07:36:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.531 [2024-07-25 07:36:50.020764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96385 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96385 /var/tmp/bdevperf.sock 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96385 ']' 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.531 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:17.531 [2024-07-25 07:36:50.083881] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:17.531 [2024-07-25 07:36:50.083967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96385 ] 00:22:17.531 [2024-07-25 07:36:50.206280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.789 [2024-07-25 07:36:50.302723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.356 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.356 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:18.356 07:36:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:18.615 07:36:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:18.873 NVMe0n1 00:22:18.873 07:36:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:18.873 07:36:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96438 00:22:18.873 07:36:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:18.873 Running I/O for 10 seconds... 
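Everything from nvmf_create_transport down to perform_tests above is driven over JSON-RPC against the two applications: the namespaced nvmf_tgt exports a 64 MB malloc namespace under nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the bdevperf instance in the root namespace attaches to it with a 5-second controller-loss timeout and a 2-second reconnect delay before the 10-second verify workload starts. A condensed sketch of that sequence, with the long paths shortened to rpc.py / bdevperf / bdevperf.py but the arguments as they appear in the trace:

  # target side (default RPC socket of the namespaced nvmf_tgt)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: bdevperf started with -z waits for configuration on its own RPC socket
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The listener is then removed from the subsystem (next entry), presumably to exercise the host-side reconnect and timeout handling this test is named after; the burst of errors and aborted I/O that follows is the consequence of pulling 10.0.0.2:4420 out from under an active connection.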
00:22:19.810 07:36:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:20.073 [2024-07-25 07:36:52.625241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15df730 is same with the state(5) to be set
[identical nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x15df730, timestamps 07:36:52.625287 through 07:36:52.625845, elided as verbatim repeats]
00:22:20.075 [2024-07-25 07:36:52.627110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627317] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.075 [2024-07-25 07:36:52.627449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.075 [2024-07-25 07:36:52.627485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109752 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:20.076 [2024-07-25 07:36:52.627832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 
07:36:52.627977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.627989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.627996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.628004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.628009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.628017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.628023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.628030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.076 [2024-07-25 07:36:52.628036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.628048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.076 [2024-07-25 07:36:52.628054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.628062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.076 [2024-07-25 07:36:52.628067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.076 [2024-07-25 07:36:52.628074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 
[2024-07-25 07:36:52.628577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.077 [2024-07-25 07:36:52.628652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.077 [2024-07-25 07:36:52.628659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.078 [2024-07-25 07:36:52.628948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.078 [2024-07-25 07:36:52.628979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110456 len:8 PRP1 0x0 PRP2 0x0 00:22:20.078 [2024-07-25 07:36:52.628984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.628993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.078 [2024-07-25 07:36:52.629003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.078 [2024-07-25 07:36:52.629008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110464 len:8 PRP1 0x0 PRP2 0x0 00:22:20.078 [2024-07-25 07:36:52.629013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.629019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.078 [2024-07-25 
07:36:52.629023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.078 [2024-07-25 07:36:52.629028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110472 len:8 PRP1 0x0 PRP2 0x0 00:22:20.078 [2024-07-25 07:36:52.629033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.629038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.078 [2024-07-25 07:36:52.629042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.078 [2024-07-25 07:36:52.629047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110480 len:8 PRP1 0x0 PRP2 0x0 00:22:20.078 [2024-07-25 07:36:52.629058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.629064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.078 [2024-07-25 07:36:52.629070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.078 [2024-07-25 07:36:52.629075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110488 len:8 PRP1 0x0 PRP2 0x0 00:22:20.078 [2024-07-25 07:36:52.629080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.078 [2024-07-25 07:36:52.629085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.078 [2024-07-25 07:36:52.629089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.078 [2024-07-25 07:36:52.629093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110496 len:8 PRP1 0x0 PRP2 0x0 00:22:20.078 [2024-07-25 07:36:52.629098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.079 [2024-07-25 07:36:52.629104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.079 [2024-07-25 07:36:52.629122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.079 [2024-07-25 07:36:52.629128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110504 len:8 PRP1 0x0 PRP2 0x0 00:22:20.079 [2024-07-25 07:36:52.629133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.079 [2024-07-25 07:36:52.629140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.079 [2024-07-25 07:36:52.629144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.079 [2024-07-25 07:36:52.629149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110512 len:8 PRP1 0x0 PRP2 0x0 00:22:20.079 [2024-07-25 07:36:52.629153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.079 [2024-07-25 07:36:52.629159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.079 [2024-07-25 07:36:52.629175] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.079 [2024-07-25 07:36:52.629179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110520 len:8 PRP1 0x0 PRP2 0x0 00:22:20.079 [2024-07-25 07:36:52.629184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.079 [2024-07-25 07:36:52.629192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.079 [2024-07-25 07:36:52.629197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.079 [2024-07-25 07:36:52.629201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110528 len:8 PRP1 0x0 PRP2 0x0 00:22:20.079 [2024-07-25 07:36:52.629207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.079 [2024-07-25 07:36:52.629255] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb078d0 was disconnected and freed. reset controller. 00:22:20.079 [2024-07-25 07:36:52.629484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:20.079 [2024-07-25 07:36:52.629551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a240 (9): Bad file descriptor 00:22:20.079 [2024-07-25 07:36:52.629623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.079 [2024-07-25 07:36:52.639111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9a240 with addr=10.0.0.2, port=4420 00:22:20.079 [2024-07-25 07:36:52.639143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9a240 is same with the state(5) to be set 00:22:20.079 [2024-07-25 07:36:52.639162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a240 (9): Bad file descriptor 00:22:20.079 [2024-07-25 07:36:52.639190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:20.079 [2024-07-25 07:36:52.639197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:20.079 [2024-07-25 07:36:52.639205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:20.079 [2024-07-25 07:36:52.639223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
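The reconnect attempts above fail with connect() errno 111 (ECONNREFUSED), apparently because the target side is no longer accepting connections on 10.0.0.2:4420, so bdev_nvme keeps scheduling another controller reset after each failure. While that retry loop runs, the test script polls the bdevperf RPC socket to confirm that the controller and its bdev are still registered, as the get_controller and get_bdev lines below show. A minimal sketch of that check, reusing the rpc.py calls, jq filters, socket path and names exactly as they appear in this trace:

    # Hedged sketch of the get_controller / get_bdev checks traced below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

    # While the controller-loss timeout has not yet expired, both are still present:
    [[ $ctrlr == NVMe0 ]] && [[ $bdev == NVMe0n1 ]]
    # Once the timeout fires (later in the trace), the same queries return empty strings.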
00:22:20.079 [2024-07-25 07:36:52.639230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:20.079 07:36:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:21.985 [2024-07-25 07:36:54.635494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.985 [2024-07-25 07:36:54.635556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9a240 with addr=10.0.0.2, port=4420 00:22:21.985 [2024-07-25 07:36:54.635567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9a240 is same with the state(5) to be set 00:22:21.985 [2024-07-25 07:36:54.635602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a240 (9): Bad file descriptor 00:22:21.985 [2024-07-25 07:36:54.635614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:21.985 [2024-07-25 07:36:54.635620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:21.985 [2024-07-25 07:36:54.635627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:21.985 [2024-07-25 07:36:54.635648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.985 [2024-07-25 07:36:54.635655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:21.985 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:21.985 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.985 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:22.245 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:22.245 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:22.245 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:22.245 07:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:22.504 07:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:22.504 07:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:24.410 [2024-07-25 07:36:56.631916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.410 [2024-07-25 07:36:56.631954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9a240 with addr=10.0.0.2, port=4420 00:22:24.410 [2024-07-25 07:36:56.631965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9a240 is same with the state(5) to be set 00:22:24.410 [2024-07-25 07:36:56.631983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9a240 (9): Bad file descriptor 00:22:24.410 [2024-07-25 07:36:56.631996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.410 [2024-07-25 07:36:56.632001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:24.410 [2024-07-25 
07:36:56.632008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.410 [2024-07-25 07:36:56.632028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:24.410 [2024-07-25 07:36:56.632035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.317 [2024-07-25 07:36:58.628213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.317 [2024-07-25 07:36:58.628254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.317 [2024-07-25 07:36:58.628261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:26.317 [2024-07-25 07:36:58.628269] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:26.317 [2024-07-25 07:36:58.628289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.886 00:22:26.886 Latency(us) 00:22:26.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.886 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:26.886 Verification LBA range: start 0x0 length 0x4000 00:22:26.886 NVMe0n1 : 8.12 1686.11 6.59 15.77 0.00 75285.18 1731.41 7033243.39 00:22:26.886 =================================================================================================================== 00:22:26.886 Total : 1686.11 6.59 15.77 0.00 75285.18 1731.41 7033243.39 00:22:27.145 0 00:22:27.405 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:27.405 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.405 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:27.664 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:27.664 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:27.664 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:27.664 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96438 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96385 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96385 ']' 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96385 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96385 00:22:27.923 killing process with pid 96385 00:22:27.923 Received shutdown signal, test time was about 8.995230 seconds 00:22:27.923 00:22:27.923 Latency(us) 00:22:27.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:27.923 =================================================================================================================== 00:22:27.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96385' 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96385 00:22:27.923 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96385 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.182 [2024-07-25 07:37:00.836069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96590 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96590 /var/tmp/bdevperf.sock 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96590 ']' 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.182 07:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:28.182 [2024-07-25 07:37:00.891824] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
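Here the test moves on to the next scenario: the TCP listener has just been re-created on the target and a fresh bdevperf instance is starting, and the lines that follow configure bdev_nvme, attach the controller with explicit reconnect and timeout options, start I/O, and then remove the listener again to force the timeout path. A condensed sketch of that sequence, with flags and paths taken from the surrounding trace (the backgrounding and the omission of waitforlisten/PID handling are simplifications of the actual script):

    # Hedged sketch of the setup traced immediately before and after this point.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # Re-add the TCP listener on the target.
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # Start bdevperf on its own RPC socket; -z keeps it waiting so the run is
    # triggered later via perform_tests, as it is further down in the trace.
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

    # Configure bdev_nvme and attach the controller with the timeout knobs under test:
    # a 5 s controller-loss timeout, a 2 s fast-IO-fail window, 1 s between reconnect attempts.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the 10 s verify workload, then pull the listener out from under it.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
    sleep 1
    "$spdk/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420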
00:22:28.182 [2024-07-25 07:37:00.891889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96590 ] 00:22:28.441 [2024-07-25 07:37:01.029384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.441 [2024-07-25 07:37:01.122650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.031 07:37:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.031 07:37:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:29.031 07:37:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:29.303 07:37:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:29.563 NVMe0n1 00:22:29.563 07:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96632 00:22:29.563 07:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:29.563 07:37:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:29.563 Running I/O for 10 seconds... 00:22:30.500 07:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.762 [2024-07-25 07:37:03.382235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 07:37:03.382337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.762 [2024-07-25 
07:37:03.382342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set
00:22:30.762 [... the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1637e10 repeats several dozen more times within the same few hundred microseconds ...]
00:22:30.763 [2024-07-25 07:37:03.382571]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637e10 is same with the state(5) to be set 00:22:30.763 [2024-07-25 07:37:03.383302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.763 [2024-07-25 07:37:03.383750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.763 [2024-07-25 07:37:03.383757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.764 [2024-07-25 07:37:03.383947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.383960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.383973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.383986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.383993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 
[2024-07-25 07:37:03.384129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.764 [2024-07-25 07:37:03.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.764 [2024-07-25 07:37:03.384359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105104 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:30.765 [2024-07-25 07:37:03.384715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 
07:37:03.384855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.765 [2024-07-25 07:37:03.384924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.765 [2024-07-25 07:37:03.384936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.384942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.384949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.384954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.384962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.384968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.384974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.384980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.384987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.384993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.766 [2024-07-25 07:37:03.385120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105416 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385183] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105424 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105432 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105440 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105448 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105456 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105464 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385323] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104760 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.766 [2024-07-25 07:37:03.385343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.766 [2024-07-25 07:37:03.385352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104768 len:8 PRP1 0x0 PRP2 0x0 00:22:30.766 [2024-07-25 07:37:03.385358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385408] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1abdb20 was disconnected and freed. reset controller. 00:22:30.766 [2024-07-25 07:37:03.385494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.766 [2024-07-25 07:37:03.385509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.766 [2024-07-25 07:37:03.385524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.385531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.766 [2024-07-25 07:37:03.403946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.403971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.766 [2024-07-25 07:37:03.403983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.766 [2024-07-25 07:37:03.403993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:30.766 [2024-07-25 07:37:03.404305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.766 [2024-07-25 07:37:03.404329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:30.766 07:37:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:30.766 [2024-07-25 07:37:03.404444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.766 [2024-07-25 07:37:03.404464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50240 with addr=10.0.0.2, port=4420 00:22:30.766 [2024-07-25 07:37:03.404475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 
00:22:30.766 [2024-07-25 07:37:03.404493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:30.766 [2024-07-25 07:37:03.404509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.766 [2024-07-25 07:37:03.404518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.766 [2024-07-25 07:37:03.404528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.766 [2024-07-25 07:37:03.404550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.766 [2024-07-25 07:37:03.404561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.703 [2024-07-25 07:37:04.402746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.703 [2024-07-25 07:37:04.402798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50240 with addr=10.0.0.2, port=4420 00:22:31.703 [2024-07-25 07:37:04.402809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:31.703 [2024-07-25 07:37:04.402828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:31.703 [2024-07-25 07:37:04.402846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.703 [2024-07-25 07:37:04.402853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:31.703 [2024-07-25 07:37:04.402861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.703 [2024-07-25 07:37:04.402882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.703 [2024-07-25 07:37:04.402890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.703 07:37:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.962 [2024-07-25 07:37:04.584838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.962 07:37:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96632 00:22:32.901 [2024-07-25 07:37:05.411685] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:41.027 
00:22:41.027 Latency(us)
00:22:41.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:41.027 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:41.027 Verification LBA range: start 0x0 length 0x4000
00:22:41.027 NVMe0n1 : 10.01 8588.96 33.55 0.00 0.00 14878.85 1409.45 3047738.80
00:22:41.027 ===================================================================================================================
00:22:41.027 Total : 8588.96 33.55 0.00 0.00 14878.85 1409.45 3047738.80
00:22:41.027 0
00:22:41.027 07:37:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:41.027 07:37:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96754
00:22:41.027 07:37:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:41.027 Running I/O for 10 seconds...
00:22:41.027 07:37:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:41.027 [2024-07-25 07:37:13.485179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set
00:22:41.027 [2024-07-25 07:37:13.485290] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the 
state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.027 [2024-07-25 07:37:13.485425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.485495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636310 is same with the state(5) to be set 00:22:41.028 [2024-07-25 07:37:13.487055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.028 [2024-07-25 07:37:13.487328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 
07:37:13.487341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.028 [2024-07-25 07:37:13.487632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.028 [2024-07-25 07:37:13.487638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487760] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.487988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.487995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.488000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.488013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.488019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.488027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.488033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.488041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.029 [2024-07-25 07:37:13.488046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.029 [2024-07-25 07:37:13.488053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.030 [2024-07-25 07:37:13.488197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 
[2024-07-25 07:37:13.488252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.030 [2024-07-25 07:37:13.488471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.030 [2024-07-25 07:37:13.488479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.031 [2024-07-25 07:37:13.488870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.031 [2024-07-25 07:37:13.488878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.032 [2024-07-25 07:37:13.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.488891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.032 [2024-07-25 07:37:13.488897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.488904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.032 [2024-07-25 07:37:13.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.488943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.488949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108912 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.488955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.488964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.488969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.488979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108920 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.488985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.488991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.488996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108928 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108936 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108944 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108952 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108960 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108968 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108976 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:108984 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108992 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109000 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.489269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.489274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.489283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109008 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.489288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.032 [2024-07-25 07:37:13.494284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.032 [2024-07-25 07:37:13.494297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.032 [2024-07-25 07:37:13.494303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109016 len:8 PRP1 0x0 PRP2 0x0 00:22:41.032 [2024-07-25 07:37:13.494310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.033 [2024-07-25 07:37:13.494354] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ace6f0 was disconnected and freed. reset controller. 
00:22:41.033 [2024-07-25 07:37:13.494442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.033 [2024-07-25 07:37:13.494453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.033 [2024-07-25 07:37:13.494461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.033 [2024-07-25 07:37:13.494467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.033 [2024-07-25 07:37:13.494485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.033 [2024-07-25 07:37:13.494500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.033 [2024-07-25 07:37:13.494508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.033 [2024-07-25 07:37:13.494514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.033 [2024-07-25 07:37:13.494521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:41.033 [2024-07-25 07:37:13.494725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:41.033 [2024-07-25 07:37:13.494743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:41.033 [2024-07-25 07:37:13.494817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.033 [2024-07-25 07:37:13.494835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50240 with addr=10.0.0.2, port=4420 00:22:41.033 [2024-07-25 07:37:13.494843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:41.033 [2024-07-25 07:37:13.494854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:41.033 [2024-07-25 07:37:13.494875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:41.033 [2024-07-25 07:37:13.494882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:41.033 [2024-07-25 07:37:13.494890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:41.033 [2024-07-25 07:37:13.494904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:41.033 [2024-07-25 07:37:13.494911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:41.033 07:37:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:41.973 [2024-07-25 07:37:14.493101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.973 [2024-07-25 07:37:14.493162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50240 with addr=10.0.0.2, port=4420 00:22:41.973 [2024-07-25 07:37:14.493173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:41.973 [2024-07-25 07:37:14.493209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:41.973 [2024-07-25 07:37:14.493222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:41.973 [2024-07-25 07:37:14.493228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:41.973 [2024-07-25 07:37:14.493236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:41.973 [2024-07-25 07:37:14.493255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:41.973 [2024-07-25 07:37:14.493263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.911 [2024-07-25 07:37:15.491467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.911 [2024-07-25 07:37:15.491526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50240 with addr=10.0.0.2, port=4420 00:22:42.911 [2024-07-25 07:37:15.491537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:42.911 [2024-07-25 07:37:15.491555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:42.911 [2024-07-25 07:37:15.491568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.911 [2024-07-25 07:37:15.491574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.911 [2024-07-25 07:37:15.491582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.911 [2024-07-25 07:37:15.491602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.911 [2024-07-25 07:37:15.491610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.856 [2024-07-25 07:37:16.492166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.856 [2024-07-25 07:37:16.492231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a50240 with addr=10.0.0.2, port=4420 00:22:43.856 [2024-07-25 07:37:16.492258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50240 is same with the state(5) to be set 00:22:43.856 [2024-07-25 07:37:16.492434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a50240 (9): Bad file descriptor 00:22:43.856 [2024-07-25 07:37:16.492648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:43.856 [2024-07-25 07:37:16.492662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:43.856 [2024-07-25 07:37:16.492670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:43.856 [2024-07-25 07:37:16.495375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:43.856 [2024-07-25 07:37:16.495406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.856 07:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.129 [2024-07-25 07:37:16.686826] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.129 07:37:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96754 00:22:45.068 [2024-07-25 07:37:17.528021] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
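The records above are the tail of the first timeout pass: every outstanding READ/WRITE on qpair 1 is completed as ABORTED - SQ DELETION, bdev_nvme frees the disconnected qpair and starts a controller reset, and the host then retries the TCP connect roughly once per second, failing with errno 111 (connection refused), until host/timeout.sh@102 re-adds the target listener and the pending reset completes. The restore step is the standard nvmf listener RPC; the line below simply restates the call as captured here, so the NQN, address and port are specific to this job:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420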
00:22:50.344 00:22:50.344 Latency(us) 00:22:50.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.344 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.344 Verification LBA range: start 0x0 length 0x4000 00:22:50.344 NVMe0n1 : 10.00 7482.71 29.23 5370.75 0.00 9936.95 441.80 3018433.62 00:22:50.344 =================================================================================================================== 00:22:50.344 Total : 7482.71 29.23 5370.75 0.00 9936.95 0.00 3018433.62 00:22:50.344 0 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96590 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96590 ']' 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96590 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96590 00:22:50.344 killing process with pid 96590 00:22:50.344 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.344 00:22:50.344 Latency(us) 00:22:50.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.344 =================================================================================================================== 00:22:50.344 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96590' 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96590 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96590 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:50.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96876 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96876 /var/tmp/bdevperf.sock 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96876 ']' 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.344 07:37:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.344 [2024-07-25 07:37:22.682771] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:22:50.344 [2024-07-25 07:37:22.682887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96876 ] 00:22:50.344 [2024-07-25 07:37:22.820710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.344 [2024-07-25 07:37:22.900141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.913 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.913 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:50.913 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96876 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:50.913 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96904 00:22:50.913 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:51.172 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:51.431 NVMe0n1 00:22:51.431 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:51.431 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96951 00:22:51.431 07:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:51.431 Running I/O for 10 seconds... 
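Condensed from the trace above, the second pass drives bdevperf entirely over its UNIX-domain RPC socket. The sketch below is a rough restatement of the steps as they appear in this run, not a canonical recipe: the paths under /home/vagrant/spdk_repo/spdk, the /var/tmp/bdevperf.sock socket, the pid 96904/96876 values, the NQN and the 10.0.0.2:4420 listener are all specific to this job, and the exact option handling lives in host/timeout.sh.

  # start bdevperf on core 2 (-m 0x4) and have it wait for an RPC trigger (-z):
  # 128 outstanding I/Os of 4096 bytes, random reads, 10 second run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &

  # attach the nvmf_timeout bpftrace script to the bdevperf pid (96876 in this run)
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96876 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

  # bdev_nvme options as set by the test (flags passed exactly as recorded above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9

  # attach the target controller with a 5 s ctrlr-loss timeout and a 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the configured workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests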
00:22:52.368 07:37:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.630 [2024-07-25 07:37:25.153089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.630 [2024-07-25 07:37:25.153262] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639c80 is same with the state(5) to be set 00:22:52.631 [2024-07-25 07:37:25.153654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.153990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.153996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.631 [2024-07-25 07:37:25.154176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.631 [2024-07-25 07:37:25.154183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:52.632 [2024-07-25 07:37:25.154357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.632 [2024-07-25 07:37:25.154761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.632 [2024-07-25 07:37:25.154768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154814] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.154988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.154995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 
07:37:25.155308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.633 [2024-07-25 07:37:25.155370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.633 [2024-07-25 07:37:25.155377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.634 [2024-07-25 07:37:25.155650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.634 [2024-07-25 07:37:25.155695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.634 [2024-07-25 07:37:25.155701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93592 len:8 PRP1 0x0 PRP2 0x0 00:22:52.634 [2024-07-25 07:37:25.155710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155781] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f208d0 was disconnected and freed. reset controller. 00:22:52.634 [2024-07-25 07:37:25.155866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.634 [2024-07-25 07:37:25.155881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.634 [2024-07-25 07:37:25.155894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.634 [2024-07-25 07:37:25.155905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.634 [2024-07-25 07:37:25.155916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.634 [2024-07-25 07:37:25.155921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb3240 is same with the state(5) to be set 00:22:52.634 [2024-07-25 07:37:25.156170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.634 [2024-07-25 07:37:25.156196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb3240 (9): Bad file descriptor 00:22:52.634 [2024-07-25 07:37:25.156284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.634 [2024-07-25 07:37:25.156299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb3240 with addr=10.0.0.2, port=4420 00:22:52.634 [2024-07-25 07:37:25.156315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb3240 is same with the state(5) to be set 00:22:52.634 [2024-07-25 07:37:25.156328] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb3240 (9): Bad file descriptor 00:22:52.634 [2024-07-25 07:37:25.156339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.634 [2024-07-25 07:37:25.156345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.634 [2024-07-25 07:37:25.156353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.634 [2024-07-25 07:37:25.156369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.634 [2024-07-25 07:37:25.156376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.634 07:37:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96951 00:22:54.542 [2024-07-25 07:37:27.152762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.542 [2024-07-25 07:37:27.152834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb3240 with addr=10.0.0.2, port=4420 00:22:54.542 [2024-07-25 07:37:27.152845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb3240 is same with the state(5) to be set 00:22:54.542 [2024-07-25 07:37:27.152864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb3240 (9): Bad file descriptor 00:22:54.542 [2024-07-25 07:37:27.152875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.542 [2024-07-25 07:37:27.152881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.542 [2024-07-25 07:37:27.152888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.542 [2024-07-25 07:37:27.152909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.542 [2024-07-25 07:37:27.152916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.446 [2024-07-25 07:37:29.149323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.446 [2024-07-25 07:37:29.149386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb3240 with addr=10.0.0.2, port=4420 00:22:56.446 [2024-07-25 07:37:29.149397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb3240 is same with the state(5) to be set 00:22:56.446 [2024-07-25 07:37:29.149416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb3240 (9): Bad file descriptor 00:22:56.446 [2024-07-25 07:37:29.149428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:56.446 [2024-07-25 07:37:29.149434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:56.446 [2024-07-25 07:37:29.149441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.446 [2024-07-25 07:37:29.149464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
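To read the error storm above: host/timeout.sh removed the 10.0.0.2:4420 listener (the nvmf_subsystem_remove_listener call earlier), so every queued read is completed manually as ABORTED - SQ DELETION and each reconnect attempt fails in posix_sock_create() with errno 111 (connection refused). The retries at 07:37:25, 07:37:27 and 07:37:29 are spaced by the --reconnect-delay-sec 2 setting, and once the --ctrlr-loss-timeout-sec 5 window runs out the controller is left in the failed state (the 07:37:31 entries just below). The attached bpftrace probes record the same events: the trace.txt dump further down shows one reset plus three 'reconnect delay' hits with timestamps about 1996 apart, consistent with the two-second delay when read as milliseconds, and the grep -c check in the log counts them. A sketch of that verification (the exact pass/fail handling inside timeout.sh is not visible in this log):

  delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)   # 3 in this run
  (( delays <= 2 )) && exit 1   # require more than two delayed reconnect attempts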
00:22:56.446 [2024-07-25 07:37:29.149470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.981 [2024-07-25 07:37:31.145676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.981 [2024-07-25 07:37:31.145718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.981 [2024-07-25 07:37:31.145724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.981 [2024-07-25 07:37:31.145731] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:58.981 [2024-07-25 07:37:31.145751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.550 00:22:59.550 Latency(us) 00:22:59.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.550 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:59.550 NVMe0n1 : 8.11 3365.67 13.15 15.77 0.00 37910.84 2919.07 7033243.39 00:22:59.550 =================================================================================================================== 00:22:59.550 Total : 3365.67 13.15 15.77 0.00 37910.84 2919.07 7033243.39 00:22:59.550 0 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.550 Attaching 5 probes... 00:22:59.550 1118.780182: reset bdev controller NVMe0 00:22:59.550 1118.842777: reconnect bdev controller NVMe0 00:22:59.550 3115.263804: reconnect delay bdev controller NVMe0 00:22:59.550 3115.284879: reconnect bdev controller NVMe0 00:22:59.550 5111.779932: reconnect delay bdev controller NVMe0 00:22:59.550 5111.800678: reconnect bdev controller NVMe0 00:22:59.550 7108.276479: reconnect delay bdev controller NVMe0 00:22:59.550 7108.294084: reconnect bdev controller NVMe0 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96904 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96876 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96876 ']' 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96876 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96876 00:22:59.550 killing process with pid 96876 00:22:59.550 Received shutdown signal, test time was about 8.193372 seconds 00:22:59.550 00:22:59.550 Latency(us) 00:22:59.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.550 =================================================================================================================== 00:22:59.550 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96876' 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96876 00:22:59.550 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96876 00:22:59.809 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.066 rmmod nvme_tcp 00:23:00.066 rmmod nvme_fabrics 00:23:00.066 rmmod nvme_keyring 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:23:00.066 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96298 ']' 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96298 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96298 ']' 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96298 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96298 00:23:00.067 killing process with pid 96298 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96298' 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96298 00:23:00.067 07:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96298 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:00.632 ************************************ 00:23:00.632 END TEST nvmf_timeout 00:23:00.632 ************************************ 00:23:00.632 00:23:00.632 real 0m45.647s 00:23:00.632 user 2m12.842s 00:23:00.632 sys 0m4.763s 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:00.632 00:23:00.632 real 5m21.876s 00:23:00.632 user 13m45.037s 00:23:00.632 sys 0m59.124s 00:23:00.632 ************************************ 00:23:00.632 END TEST nvmf_host 00:23:00.632 ************************************ 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.632 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.632 00:23:00.632 real 15m5.358s 00:23:00.632 user 39m54.159s 00:23:00.632 sys 3m4.802s 00:23:00.632 07:37:33 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.632 07:37:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.632 ************************************ 00:23:00.632 END TEST nvmf_tcp 00:23:00.632 ************************************ 00:23:00.632 07:37:33 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:23:00.632 07:37:33 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:00.632 07:37:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:00.632 07:37:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.632 07:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:00.632 ************************************ 00:23:00.632 START TEST spdkcli_nvmf_tcp 00:23:00.632 ************************************ 00:23:00.632 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:00.891 * Looking for test storage... 
00:23:00.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:00.891 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:00.891 07:37:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:00.891 07:37:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=97174 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 97174 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 97174 ']' 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
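Each quoted triple handed to spdkcli_job.py below is a spdkcli command, a substring expected in its output, and whether the match is required. Assuming spdkcli.py executes a command passed on its argument list (as it does for 'll /nvmf' later in this run) against the default /var/tmp/spdk.sock, the same steps can be issued one at a time; a short illustrative sketch:
# Illustrative one-off invocations against the nvmf_tgt started above.
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf   # print the resulting /nvmf tree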
00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.892 07:37:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:23:00.892 [2024-07-25 07:37:33.514006] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:00.892 [2024-07-25 07:37:33.514074] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97174 ] 00:23:01.151 [2024-07-25 07:37:33.653488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:01.151 [2024-07-25 07:37:33.764844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.151 [2024-07-25 07:37:33.764846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.719 07:37:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:01.719 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:01.719 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:23:01.719 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:01.719 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:01.719 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:23:01.719 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:01.719 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:01.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:01.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:01.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:01.719 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:01.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:01.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:23:01.719 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:01.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:01.720 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:01.720 ' 00:23:05.043 [2024-07-25 07:37:37.100941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.979 [2024-07-25 07:37:38.427466] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:23:08.515 [2024-07-25 07:37:40.904310] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:23:10.420 [2024-07-25 07:37:43.061529] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:23:12.324 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:12.324 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:12.324 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:12.324 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:23:12.324 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:12.324 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:12.324 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:12.324 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 
IPv4', '127.0.0.1:4260', True] 00:23:12.324 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:12.324 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:12.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:12.324 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:23:12.324 07:37:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:23:12.583 07:37:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:12.583 07:37:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 
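The check_match step traced above dumps the current /nvmf tree and compares it against a stored pattern file; a condensed sketch of that flow, with the behaviour of the match helper inferred from the trace (it takes the *.match pattern and checks the like-named dump without the suffix):
# Sketch of the check_match flow; paths shortened for readability.
spdkcli=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
match=/home/vagrant/spdk_repo/spdk/test/app/match/match
out=/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test

"$spdkcli" ll /nvmf > "$out"   # capture the current configuration tree
"$match" "$out.match"          # compare the dump against the stored pattern
rm -f "$out"                   # drop the temporary dump on success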
00:23:12.583 07:37:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:12.583 07:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.583 07:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.841 07:37:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:12.841 07:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.841 07:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.841 07:37:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:12.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:12.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:12.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:12.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:23:12.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:23:12.841 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:12.841 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:12.841 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:12.841 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:12.841 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:12.841 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:12.841 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:12.841 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:12.841 ' 00:23:19.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:19.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:19.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:19.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:19.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:19.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:19.414 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:19.414 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:19.414 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:19.414 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:19.414 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:19.414 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:19.414 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:19.414 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:19.414 07:37:51 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97174 ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97174' 00:23:19.414 killing process with pid 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 97174 ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 97174 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97174 ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97174 00:23:19.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (97174) - No such process 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 97174 is not found' 00:23:19.414 Process with pid 97174 is not found 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:19.414 00:23:19.414 real 0m18.110s 00:23:19.414 user 0m39.796s 00:23:19.414 sys 0m1.088s 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:19.414 07:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.414 ************************************ 00:23:19.414 END TEST spdkcli_nvmf_tcp 00:23:19.414 ************************************ 00:23:19.414 07:37:51 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:19.414 07:37:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:19.414 07:37:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.414 07:37:51 -- common/autotest_common.sh@10 -- # set +x 00:23:19.414 ************************************ 00:23:19.414 START TEST nvmf_identify_passthru 00:23:19.414 ************************************ 00:23:19.414 07:37:51 nvmf_identify_passthru -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:19.414 * Looking for test storage... 00:23:19.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.414 07:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:19.414 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.415 07:37:51 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.415 07:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.415 07:37:51 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:19.415 07:37:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.415 07:37:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.415 07:37:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:19.415 07:37:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.415 07:37:51 nvmf_identify_passthru -- 
nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:19.415 Cannot find device "nvmf_tgt_br" 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.415 Cannot find device "nvmf_tgt_br2" 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:19.415 Cannot find device "nvmf_tgt_br" 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:19.415 Cannot find device "nvmf_tgt_br2" 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.415 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:19.416 07:37:51 
nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:19.416 07:37:51 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:19.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:23:19.416 00:23:19.416 --- 10.0.0.2 ping statistics --- 00:23:19.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.416 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:19.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:19.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:23:19.416 00:23:19.416 --- 10.0.0.3 ping statistics --- 00:23:19.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.416 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:19.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:23:19.416 00:23:19.416 --- 10.0.0.1 ping statistics --- 00:23:19.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.416 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.416 07:37:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.416 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.416 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:19.416 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:23:19.675 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:23:19.675 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:19.675 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
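The serial number captured here from the local PCIe controller is what the passthru subsystem is expected to report again over the fabric later in this test. Condensed from the commands in this run (the TCP-side identify only works once the cnode1 subsystem has been created further below):
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

# Local PCIe controller behind the passthru bdev
$identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep 'Serial Number:' | awk '{print $3}'   # -> 12340

# Same controller exposed over NVMe/TCP by the passthru subsystem; expected to match
$identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  | grep 'Serial Number:' | awk '{print $3}'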
00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:19.675 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97682 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.934 07:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97682 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97682 ']' 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.934 07:37:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.193 [2024-07-25 07:37:52.717540] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:20.193 [2024-07-25 07:37:52.717630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.193 [2024-07-25 07:37:52.856340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.452 [2024-07-25 07:37:52.969429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.452 [2024-07-25 07:37:52.969499] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.452 [2024-07-25 07:37:52.969506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.452 [2024-07-25 07:37:52.969511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:20.452 [2024-07-25 07:37:52.969515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.452 [2024-07-25 07:37:52.969746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.452 [2024-07-25 07:37:52.969955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.453 [2024-07-25 07:37:52.971169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.453 [2024-07-25 07:37:52.971172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:23:21.020 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.020 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.020 [2024-07-25 07:37:53.694882] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.020 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.020 [2024-07-25 07:37:53.708880] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.020 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.020 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.279 Nvme0n1 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.279 [2024-07-25 07:37:53.880130] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.279 [ 00:23:21.279 { 00:23:21.279 "allow_any_host": true, 00:23:21.279 "hosts": [], 00:23:21.279 "listen_addresses": [], 00:23:21.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:21.279 "subtype": "Discovery" 00:23:21.279 }, 00:23:21.279 { 00:23:21.279 "allow_any_host": true, 00:23:21.279 "hosts": [], 00:23:21.279 "listen_addresses": [ 00:23:21.279 { 00:23:21.279 "adrfam": "IPv4", 00:23:21.279 "traddr": "10.0.0.2", 00:23:21.279 "trsvcid": "4420", 00:23:21.279 "trtype": "TCP" 00:23:21.279 } 00:23:21.279 ], 00:23:21.279 "max_cntlid": 65519, 00:23:21.279 "max_namespaces": 1, 00:23:21.279 "min_cntlid": 1, 00:23:21.279 "model_number": "SPDK bdev Controller", 00:23:21.279 "namespaces": [ 00:23:21.279 { 00:23:21.279 "bdev_name": "Nvme0n1", 00:23:21.279 "name": "Nvme0n1", 00:23:21.279 "nguid": "D7068A1386A846F1AE51E535BB1DD944", 00:23:21.279 "nsid": 1, 00:23:21.279 "uuid": "d7068a13-86a8-46f1-ae51-e535bb1dd944" 00:23:21.279 } 00:23:21.279 ], 00:23:21.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.279 "serial_number": "SPDK00000000000001", 00:23:21.279 "subtype": "NVMe" 00:23:21.279 } 00:23:21.279 ] 00:23:21.279 07:37:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:21.279 07:37:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:21.539 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:23:21.539 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:21.539 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:21.539 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:21.798 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:23:21.798 07:37:54 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:23:21.798 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:23:21.798 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.798 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:21.798 07:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.798 rmmod nvme_tcp 00:23:21.798 rmmod nvme_fabrics 00:23:21.798 rmmod nvme_keyring 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97682 ']' 00:23:21.798 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97682 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97682 ']' 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97682 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97682 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:21.798 killing process with pid 97682 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97682' 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97682 00:23:21.798 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97682 00:23:22.058 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.058 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.058 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.058 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.058 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.058 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.058 07:37:54 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:22.058 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.318 07:37:54 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:22.318 00:23:22.318 real 0m3.319s 00:23:22.318 user 0m7.461s 00:23:22.318 sys 0m1.096s 00:23:22.318 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.318 07:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:22.318 ************************************ 00:23:22.318 END TEST nvmf_identify_passthru 00:23:22.318 ************************************ 00:23:22.318 07:37:54 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:22.318 07:37:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:22.318 07:37:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.318 07:37:54 -- common/autotest_common.sh@10 -- # set +x 00:23:22.318 ************************************ 00:23:22.318 START TEST nvmf_dif 00:23:22.318 ************************************ 00:23:22.318 07:37:54 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:22.318 * Looking for test storage... 00:23:22.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:22.318 07:37:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:22.318 07:37:55 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.318 07:37:55 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.318 07:37:55 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.318 07:37:55 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.318 07:37:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.318 07:37:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.318 07:37:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:22.318 07:37:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.318 07:37:55 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.578 07:37:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:22.578 07:37:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:22.578 07:37:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:22.578 07:37:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:22.578 07:37:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.578 07:37:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:22.578 07:37:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
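The NULL_* values set by target/dif.sh just above (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1) are what the DIF tests later turn into their backing device. Each test creates it with the call traced further down:

  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

Read against the variables, that is a null bdev with 512-byte blocks, 16 bytes of metadata per block and protection information type 1; interpreting 64 as the bdev size in MiB is an assumption here, the argument values themselves are taken directly from the trace.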
00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:22.578 Cannot find device "nvmf_tgt_br" 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.578 Cannot find device "nvmf_tgt_br2" 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:22.578 Cannot find device "nvmf_tgt_br" 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:22.578 Cannot find device "nvmf_tgt_br2" 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:22.578 07:37:55 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:22.837 07:37:55 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:22.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:23:22.838 00:23:22.838 --- 10.0.0.2 ping statistics --- 00:23:22.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.838 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:22.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:22.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:23:22.838 00:23:22.838 --- 10.0.0.3 ping statistics --- 00:23:22.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.838 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:22.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:22.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:23:22.838 00:23:22.838 --- 10.0.0.1 ping statistics --- 00:23:22.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.838 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:22.838 07:37:55 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:23.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:23.407 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.407 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.407 07:37:55 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.407 07:37:55 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.407 07:37:55 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.407 07:37:55 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.407 07:37:55 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.407 07:37:55 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.407 07:37:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:23.407 07:37:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:23.407 07:37:56 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:23.407 07:37:56 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=98030 00:23:23.407 07:37:56 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:23.407 07:37:56 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 98030 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 98030 ']' 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.407 07:37:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:23.407 [2024-07-25 07:37:56.070382] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:23:23.407 [2024-07-25 07:37:56.070438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.666 [2024-07-25 07:37:56.210317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.666 [2024-07-25 07:37:56.318392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
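What distinguishes this nvmf_tgt instance (pid 98030) from the earlier identify_passthru one is the transport option appended by dif.sh, --dif-insert-or-strip, which (as the option name suggests) has the target-side TCP transport insert and strip the protection information for the DIF-enabled null bdevs. Condensed from the traced commands, the target and transport come up as:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # once the RPC socket is listening (waitforlisten), the transport is created with DIF handling enabled
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip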
00:23:23.666 [2024-07-25 07:37:56.318455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.666 [2024-07-25 07:37:56.318461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.666 [2024-07-25 07:37:56.318467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.666 [2024-07-25 07:37:56.318471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.666 [2024-07-25 07:37:56.318512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.235 07:37:56 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.235 07:37:56 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:23:24.235 07:37:56 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.235 07:37:56 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.235 07:37:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.235 07:37:56 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.235 07:37:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:24.235 07:37:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:24.235 07:37:56 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.235 07:37:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.495 [2024-07-25 07:37:56.970280] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.495 07:37:56 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.495 07:37:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:24.495 07:37:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:24.495 07:37:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.495 07:37:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.495 ************************************ 00:23:24.495 START TEST fio_dif_1_default 00:23:24.495 ************************************ 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.495 07:37:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.495 bdev_null0 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.495 07:37:57 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.495 [2024-07-25 07:37:57.034287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.495 { 00:23:24.495 "params": { 00:23:24.495 "name": "Nvme$subsystem", 00:23:24.495 "trtype": "$TEST_TRANSPORT", 00:23:24.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.495 "adrfam": "ipv4", 00:23:24.495 "trsvcid": "$NVMF_PORT", 00:23:24.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.495 "hdgst": ${hdgst:-false}, 00:23:24.495 "ddgst": ${ddgst:-false} 00:23:24.495 }, 00:23:24.495 "method": "bdev_nvme_attach_controller" 00:23:24.495 } 00:23:24.495 EOF 00:23:24.495 )") 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.495 07:37:57 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:24.495 "params": { 00:23:24.495 "name": "Nvme0", 00:23:24.495 "trtype": "tcp", 00:23:24.495 "traddr": "10.0.0.2", 00:23:24.495 "adrfam": "ipv4", 00:23:24.495 "trsvcid": "4420", 00:23:24.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:24.495 "hdgst": false, 00:23:24.495 "ddgst": false 00:23:24.495 }, 00:23:24.495 "method": "bdev_nvme_attach_controller" 00:23:24.495 }' 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:24.495 07:37:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.755 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:24.755 fio-3.35 00:23:24.755 Starting 1 thread 00:23:36.968 00:23:36.968 filename0: (groupid=0, jobs=1): err= 0: pid=98115: Thu Jul 25 07:38:07 2024 00:23:36.968 read: IOPS=1769, BW=7077KiB/s (7247kB/s)(69.3MiB/10031msec) 00:23:36.968 slat (nsec): min=5166, max=58122, avg=5582.84, stdev=1423.50 00:23:36.968 clat (usec): min=275, max=42313, avg=2245.27, stdev=8634.83 00:23:36.968 lat (usec): min=280, max=42319, avg=2250.85, stdev=8634.83 00:23:36.968 clat percentiles (usec): 00:23:36.968 | 1.00th=[ 285], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:23:36.968 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 
310], 00:23:36.968 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 519], 00:23:36.968 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:23:36.968 | 99.99th=[42206] 00:23:36.968 bw ( KiB/s): min= 5088, max=11456, per=100.00%, avg=7097.05, stdev=1970.06, samples=20 00:23:36.968 iops : min= 1272, max= 2864, avg=1774.25, stdev=492.52, samples=20 00:23:36.968 lat (usec) : 500=94.98%, 750=0.20% 00:23:36.968 lat (msec) : 4=0.02%, 50=4.80% 00:23:36.968 cpu : usr=93.90%, sys=5.45%, ctx=26, majf=0, minf=9 00:23:36.968 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:36.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.968 issued rwts: total=17748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.968 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:36.968 00:23:36.968 Run status group 0 (all jobs): 00:23:36.968 READ: bw=7077KiB/s (7247kB/s), 7077KiB/s-7077KiB/s (7247kB/s-7247kB/s), io=69.3MiB (72.7MB), run=10031-10031msec 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:36.968 ************************************ 00:23:36.968 END TEST fio_dif_1_default 00:23:36.968 ************************************ 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.968 00:23:36.968 real 0m11.042s 00:23:36.968 user 0m10.104s 00:23:36.968 sys 0m0.835s 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:36.968 07:38:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:36.968 07:38:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:36.968 07:38:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:36.968 07:38:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.968 07:38:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:36.968 ************************************ 00:23:36.969 START TEST fio_dif_1_multi_subsystems 00:23:36.969 ************************************ 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 bdev_null0 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 [2024-07-25 07:38:08.149262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 bdev_null1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.969 { 00:23:36.969 "params": { 00:23:36.969 "name": "Nvme$subsystem", 00:23:36.969 "trtype": "$TEST_TRANSPORT", 00:23:36.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.969 "adrfam": "ipv4", 00:23:36.969 "trsvcid": "$NVMF_PORT", 00:23:36.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.969 "hdgst": ${hdgst:-false}, 00:23:36.969 "ddgst": ${ddgst:-false} 00:23:36.969 }, 00:23:36.969 "method": "bdev_nvme_attach_controller" 00:23:36.969 } 00:23:36.969 EOF 00:23:36.969 )") 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # shift 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.969 { 00:23:36.969 "params": { 00:23:36.969 "name": "Nvme$subsystem", 00:23:36.969 "trtype": "$TEST_TRANSPORT", 00:23:36.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.969 "adrfam": "ipv4", 00:23:36.969 "trsvcid": "$NVMF_PORT", 00:23:36.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.969 "hdgst": ${hdgst:-false}, 00:23:36.969 "ddgst": ${ddgst:-false} 00:23:36.969 }, 00:23:36.969 "method": "bdev_nvme_attach_controller" 00:23:36.969 } 00:23:36.969 EOF 00:23:36.969 )") 00:23:36.969 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
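Both fio_dif jobs are launched the same way: gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem passed to it (here Nvme0 and Nvme1, printed just below) onto /dev/fd/62, gen_fio_conf writes the job file consumed on /dev/fd/61, and fio runs them through the SPDK bdev ioengine. Condensed from the traced invocation:

  # fio loads the SPDK bdev plugin via LD_PRELOAD and attaches over NVMe/TCP per the JSON config
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61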
00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:36.970 "params": { 00:23:36.970 "name": "Nvme0", 00:23:36.970 "trtype": "tcp", 00:23:36.970 "traddr": "10.0.0.2", 00:23:36.970 "adrfam": "ipv4", 00:23:36.970 "trsvcid": "4420", 00:23:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:36.970 "hdgst": false, 00:23:36.970 "ddgst": false 00:23:36.970 }, 00:23:36.970 "method": "bdev_nvme_attach_controller" 00:23:36.970 },{ 00:23:36.970 "params": { 00:23:36.970 "name": "Nvme1", 00:23:36.970 "trtype": "tcp", 00:23:36.970 "traddr": "10.0.0.2", 00:23:36.970 "adrfam": "ipv4", 00:23:36.970 "trsvcid": "4420", 00:23:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.970 "hdgst": false, 00:23:36.970 "ddgst": false 00:23:36.970 }, 00:23:36.970 "method": "bdev_nvme_attach_controller" 00:23:36.970 }' 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:36.970 07:38:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.970 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:36.970 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:36.970 fio-3.35 00:23:36.970 Starting 2 threads 00:23:46.947 00:23:46.947 filename0: (groupid=0, jobs=1): err= 0: pid=98283: Thu Jul 25 07:38:19 2024 00:23:46.947 read: IOPS=244, BW=979KiB/s (1002kB/s)(9792KiB/10006msec) 00:23:46.947 slat (nsec): min=5209, max=69841, avg=8786.49, stdev=6859.53 00:23:46.947 clat (usec): min=289, max=42331, avg=16322.84, stdev=19805.80 00:23:46.947 lat (usec): min=295, max=42337, avg=16331.63, stdev=19804.69 00:23:46.947 clat percentiles (usec): 00:23:46.947 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:23:46.947 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 611], 00:23:46.947 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:46.947 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:23:46.947 | 99.99th=[42206] 00:23:46.947 bw ( KiB/s): min= 672, max= 2112, per=26.60%, avg=973.47, stdev=317.02, samples=19 00:23:46.947 iops : 
min= 168, max= 528, avg=243.37, stdev=79.26, samples=19 00:23:46.947 lat (usec) : 500=52.70%, 750=7.68%, 1000=0.08% 00:23:46.947 lat (msec) : 4=0.16%, 50=39.38% 00:23:46.947 cpu : usr=97.94%, sys=1.68%, ctx=10, majf=0, minf=0 00:23:46.947 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.947 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.947 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:46.947 filename1: (groupid=0, jobs=1): err= 0: pid=98284: Thu Jul 25 07:38:19 2024 00:23:46.947 read: IOPS=670, BW=2682KiB/s (2747kB/s)(26.3MiB/10039msec) 00:23:46.947 slat (nsec): min=3221, max=69347, avg=8056.52, stdev=5378.20 00:23:46.947 clat (usec): min=325, max=42379, avg=5940.95, stdev=13944.97 00:23:46.947 lat (usec): min=331, max=42386, avg=5949.00, stdev=13944.83 00:23:46.947 clat percentiles (usec): 00:23:46.947 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:23:46.947 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 375], 00:23:46.947 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[40633], 95.00th=[41157], 00:23:46.947 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:23:46.947 | 99.99th=[42206] 00:23:46.947 bw ( KiB/s): min= 1856, max= 4512, per=73.57%, avg=2691.20, stdev=605.10, samples=20 00:23:46.947 iops : min= 464, max= 1128, avg=672.80, stdev=151.28, samples=20 00:23:46.947 lat (usec) : 500=83.10%, 750=3.07%, 1000=0.04% 00:23:46.947 lat (msec) : 10=0.06%, 50=13.73% 00:23:46.947 cpu : usr=98.55%, sys=0.88%, ctx=58, majf=0, minf=9 00:23:46.947 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.947 issued rwts: total=6732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.947 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:46.947 00:23:46.947 Run status group 0 (all jobs): 00:23:46.947 READ: bw=3658KiB/s (3746kB/s), 979KiB/s-2682KiB/s (1002kB/s-2747kB/s), io=35.9MiB (37.6MB), run=10006-10039msec 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.947 07:38:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.947 00:23:46.947 real 0m11.206s 00:23:46.947 user 0m20.553s 00:23:46.947 sys 0m0.543s 00:23:46.947 ************************************ 00:23:46.947 END TEST fio_dif_1_multi_subsystems 00:23:46.947 ************************************ 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.947 07:38:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.947 07:38:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:46.947 07:38:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:46.947 07:38:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.947 07:38:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.947 ************************************ 00:23:46.947 START TEST fio_dif_rand_params 00:23:46.947 ************************************ 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:46.947 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.948 bdev_null0 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.948 [2024-07-25 07:38:19.425004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.948 { 00:23:46.948 "params": { 00:23:46.948 "name": "Nvme$subsystem", 00:23:46.948 "trtype": "$TEST_TRANSPORT", 00:23:46.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.948 "adrfam": "ipv4", 00:23:46.948 "trsvcid": "$NVMF_PORT", 00:23:46.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.948 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:46.948 "hdgst": ${hdgst:-false}, 00:23:46.948 "ddgst": ${ddgst:-false} 00:23:46.948 }, 00:23:46.948 "method": "bdev_nvme_attach_controller" 00:23:46.948 } 00:23:46.948 EOF 00:23:46.948 )") 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:46.948 "params": { 00:23:46.948 "name": "Nvme0", 00:23:46.948 "trtype": "tcp", 00:23:46.948 "traddr": "10.0.0.2", 00:23:46.948 "adrfam": "ipv4", 00:23:46.948 "trsvcid": "4420", 00:23:46.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.948 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.948 "hdgst": false, 00:23:46.948 "ddgst": false 00:23:46.948 }, 00:23:46.948 "method": "bdev_nvme_attach_controller" 00:23:46.948 }' 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:46.948 07:38:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.948 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:46.948 ... 
00:23:46.948 fio-3.35 00:23:46.948 Starting 3 threads 00:23:53.520 00:23:53.520 filename0: (groupid=0, jobs=1): err= 0: pid=98440: Thu Jul 25 07:38:25 2024 00:23:53.520 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(175MiB/5006msec) 00:23:53.520 slat (nsec): min=5634, max=48530, avg=12124.31, stdev=6802.95 00:23:53.520 clat (usec): min=4000, max=50984, avg=10713.52, stdev=11656.35 00:23:53.520 lat (usec): min=4007, max=50989, avg=10725.64, stdev=11656.08 00:23:53.520 clat percentiles (usec): 00:23:53.520 | 1.00th=[ 4490], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 5735], 00:23:53.520 | 30.00th=[ 6259], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 7963], 00:23:53.520 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[47973], 00:23:53.520 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[51119], 00:23:53.520 | 99.99th=[51119] 00:23:53.520 bw ( KiB/s): min=25600, max=44800, per=27.70%, avg=34104.89, stdev=7240.52, samples=9 00:23:53.520 iops : min= 200, max= 350, avg=266.44, stdev=56.57, samples=9 00:23:53.520 lat (msec) : 10=91.21%, 50=8.36%, 100=0.43% 00:23:53.520 cpu : usr=96.54%, sys=2.50%, ctx=7, majf=0, minf=0 00:23:53.520 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.520 issued rwts: total=1399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:53.520 filename0: (groupid=0, jobs=1): err= 0: pid=98441: Thu Jul 25 07:38:25 2024 00:23:53.520 read: IOPS=387, BW=48.5MiB/s (50.8MB/s)(243MiB/5004msec) 00:23:53.520 slat (nsec): min=5679, max=64794, avg=13912.02, stdev=10838.88 00:23:53.520 clat (usec): min=2950, max=50977, avg=7700.21, stdev=3658.29 00:23:53.520 lat (usec): min=2958, max=51006, avg=7714.12, stdev=3660.28 00:23:53.520 clat percentiles (usec): 00:23:53.520 | 1.00th=[ 3294], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3785], 00:23:53.520 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7570], 00:23:53.520 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11469], 95.00th=[11731], 00:23:53.520 | 99.00th=[12387], 99.50th=[12649], 99.90th=[51119], 99.95th=[51119], 00:23:53.520 | 99.99th=[51119] 00:23:53.520 bw ( KiB/s): min=37632, max=59904, per=40.28%, avg=49588.67, stdev=6889.68, samples=9 00:23:53.520 iops : min= 294, max= 468, avg=387.33, stdev=53.88, samples=9 00:23:53.520 lat (msec) : 4=20.20%, 10=50.13%, 20=29.37%, 50=0.15%, 100=0.15% 00:23:53.520 cpu : usr=93.06%, sys=4.92%, ctx=75, majf=0, minf=0 00:23:53.520 IO depths : 1=30.1%, 2=69.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.520 issued rwts: total=1941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:53.520 filename0: (groupid=0, jobs=1): err= 0: pid=98442: Thu Jul 25 07:38:25 2024 00:23:53.520 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(184MiB/5004msec) 00:23:53.520 slat (nsec): min=5810, max=70700, avg=12202.17, stdev=6542.91 00:23:53.520 clat (usec): min=2774, max=51543, avg=10159.87, stdev=10208.52 00:23:53.520 lat (usec): min=2780, max=51549, avg=10172.07, stdev=10208.54 00:23:53.520 clat percentiles (usec): 00:23:53.520 | 1.00th=[ 4293], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 5735], 
00:23:53.520 | 30.00th=[ 5997], 40.00th=[ 6849], 50.00th=[ 8160], 60.00th=[ 8717], 00:23:53.520 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[46400], 00:23:53.520 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:23:53.520 | 99.99th=[51643] 00:23:53.520 bw ( KiB/s): min=31744, max=46848, per=32.14%, avg=39566.22, stdev=4480.20, samples=9 00:23:53.520 iops : min= 248, max= 366, avg=309.11, stdev=35.00, samples=9 00:23:53.520 lat (msec) : 4=0.68%, 10=88.14%, 20=4.68%, 50=5.42%, 100=1.08% 00:23:53.520 cpu : usr=95.98%, sys=2.68%, ctx=7, majf=0, minf=0 00:23:53.520 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.520 issued rwts: total=1475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:53.520 00:23:53.520 Run status group 0 (all jobs): 00:23:53.520 READ: bw=120MiB/s (126MB/s), 34.9MiB/s-48.5MiB/s (36.6MB/s-50.8MB/s), io=602MiB (631MB), run=5004-5006msec 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 bdev_null0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 [2024-07-25 07:38:25.448239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.520 bdev_null1 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:53.520 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 bdev_null2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:23:53.521 { 00:23:53.521 "params": { 00:23:53.521 "name": "Nvme$subsystem", 00:23:53.521 "trtype": "$TEST_TRANSPORT", 00:23:53.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.521 "adrfam": "ipv4", 00:23:53.521 "trsvcid": "$NVMF_PORT", 00:23:53.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.521 "hdgst": ${hdgst:-false}, 00:23:53.521 "ddgst": ${ddgst:-false} 00:23:53.521 }, 00:23:53.521 "method": "bdev_nvme_attach_controller" 00:23:53.521 } 00:23:53.521 EOF 00:23:53.521 )") 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.521 { 00:23:53.521 "params": { 00:23:53.521 "name": "Nvme$subsystem", 00:23:53.521 "trtype": "$TEST_TRANSPORT", 00:23:53.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.521 "adrfam": "ipv4", 00:23:53.521 "trsvcid": "$NVMF_PORT", 00:23:53.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.521 "hdgst": ${hdgst:-false}, 00:23:53.521 "ddgst": ${ddgst:-false} 00:23:53.521 }, 00:23:53.521 "method": "bdev_nvme_attach_controller" 00:23:53.521 } 00:23:53.521 EOF 00:23:53.521 )") 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.521 07:38:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.521 { 00:23:53.521 "params": { 00:23:53.521 "name": "Nvme$subsystem", 00:23:53.521 "trtype": "$TEST_TRANSPORT", 00:23:53.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.521 "adrfam": "ipv4", 00:23:53.521 "trsvcid": "$NVMF_PORT", 00:23:53.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.521 "hdgst": ${hdgst:-false}, 00:23:53.521 "ddgst": ${ddgst:-false} 00:23:53.521 }, 00:23:53.521 "method": "bdev_nvme_attach_controller" 00:23:53.521 } 00:23:53.521 EOF 00:23:53.521 )") 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.521 "params": { 00:23:53.521 "name": "Nvme0", 00:23:53.521 "trtype": "tcp", 00:23:53.521 "traddr": "10.0.0.2", 00:23:53.521 "adrfam": "ipv4", 00:23:53.521 "trsvcid": "4420", 00:23:53.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:53.521 "hdgst": false, 00:23:53.521 "ddgst": false 00:23:53.521 }, 00:23:53.521 "method": "bdev_nvme_attach_controller" 00:23:53.521 },{ 00:23:53.521 "params": { 00:23:53.521 "name": "Nvme1", 00:23:53.521 "trtype": "tcp", 00:23:53.521 "traddr": "10.0.0.2", 00:23:53.521 "adrfam": "ipv4", 00:23:53.521 "trsvcid": "4420", 00:23:53.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.521 "hdgst": false, 00:23:53.521 "ddgst": false 00:23:53.521 }, 00:23:53.521 "method": "bdev_nvme_attach_controller" 00:23:53.521 },{ 00:23:53.521 "params": { 00:23:53.521 "name": "Nvme2", 00:23:53.521 "trtype": "tcp", 00:23:53.521 "traddr": "10.0.0.2", 00:23:53.521 "adrfam": "ipv4", 00:23:53.521 "trsvcid": "4420", 00:23:53.521 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.521 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.521 "hdgst": false, 00:23:53.521 "ddgst": false 00:23:53.521 }, 00:23:53.521 "method": "bdev_nvme_attach_controller" 00:23:53.521 }' 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:53.521 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:53.522 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:53.522 
07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:53.522 07:38:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.522 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:53.522 ... 00:23:53.522 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:53.522 ... 00:23:53.522 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:53.522 ... 00:23:53.522 fio-3.35 00:23:53.522 Starting 24 threads 00:24:05.762 00:24:05.762 filename0: (groupid=0, jobs=1): err= 0: pid=98545: Thu Jul 25 07:38:36 2024 00:24:05.762 read: IOPS=252, BW=1011KiB/s (1035kB/s)(9.89MiB/10018msec) 00:24:05.762 slat (usec): min=2, max=8037, avg=21.94, stdev=236.36 00:24:05.762 clat (msec): min=24, max=130, avg=63.17, stdev=17.36 00:24:05.762 lat (msec): min=24, max=130, avg=63.19, stdev=17.37 00:24:05.762 clat percentiles (msec): 00:24:05.762 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:24:05.762 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 67], 00:24:05.762 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 95], 00:24:05.762 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 131], 99.95th=[ 131], 00:24:05.762 | 99.99th=[ 131] 00:24:05.762 bw ( KiB/s): min= 728, max= 1304, per=3.88%, avg=1011.79, stdev=157.09, samples=19 00:24:05.763 iops : min= 182, max= 326, avg=252.95, stdev=39.27, samples=19 00:24:05.763 lat (msec) : 50=28.33%, 100=69.46%, 250=2.21% 00:24:05.763 cpu : usr=38.00%, sys=0.23%, ctx=1108, majf=0, minf=9 00:24:05.763 IO depths : 1=2.2%, 2=5.1%, 4=14.5%, 8=67.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: total=2531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98546: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.1MiB/10034msec) 00:24:05.763 slat (usec): min=3, max=8053, avg=24.80, stdev=314.89 00:24:05.763 clat (msec): min=21, max=129, avg=61.71, stdev=19.23 00:24:05.763 lat (msec): min=21, max=129, avg=61.74, stdev=19.23 00:24:05.763 clat percentiles (msec): 00:24:05.763 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:24:05.763 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 69], 00:24:05.763 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 95], 00:24:05.763 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 130], 99.95th=[ 130], 00:24:05.763 | 99.99th=[ 130] 00:24:05.763 bw ( KiB/s): min= 768, max= 1448, per=3.96%, avg=1031.25, stdev=203.95, samples=20 00:24:05.763 iops : min= 192, max= 362, avg=257.80, stdev=50.99, samples=20 00:24:05.763 lat (msec) : 50=35.57%, 100=60.81%, 250=3.62% 00:24:05.763 cpu : usr=32.82%, sys=0.26%, ctx=898, majf=0, minf=9 00:24:05.763 IO depths : 1=2.1%, 2=4.9%, 4=14.4%, 8=67.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=91.2%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: 
total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98547: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=266, BW=1064KiB/s (1090kB/s)(10.4MiB/10038msec) 00:24:05.763 slat (usec): min=3, max=3536, avg=14.25, stdev=84.91 00:24:05.763 clat (msec): min=22, max=127, avg=60.02, stdev=19.13 00:24:05.763 lat (msec): min=22, max=127, avg=60.04, stdev=19.14 00:24:05.763 clat percentiles (msec): 00:24:05.763 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 46], 00:24:05.763 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 62], 00:24:05.763 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 96], 00:24:05.763 | 99.00th=[ 108], 99.50th=[ 118], 99.90th=[ 128], 99.95th=[ 128], 00:24:05.763 | 99.99th=[ 129] 00:24:05.763 bw ( KiB/s): min= 768, max= 1410, per=4.08%, avg=1062.10, stdev=192.29, samples=20 00:24:05.763 iops : min= 192, max= 352, avg=265.50, stdev=48.03, samples=20 00:24:05.763 lat (msec) : 50=38.49%, 100=57.84%, 250=3.67% 00:24:05.763 cpu : usr=37.68%, sys=0.26%, ctx=1214, majf=0, minf=9 00:24:05.763 IO depths : 1=1.0%, 2=2.5%, 4=9.1%, 8=74.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=90.2%, 8=5.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98548: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=256, BW=1026KiB/s (1050kB/s)(10.0MiB/10014msec) 00:24:05.763 slat (usec): min=2, max=8048, avg=19.87, stdev=237.67 00:24:05.763 clat (msec): min=23, max=132, avg=62.24, stdev=18.03 00:24:05.763 lat (msec): min=23, max=132, avg=62.26, stdev=18.03 00:24:05.763 clat percentiles (msec): 00:24:05.763 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:24:05.763 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 64], 00:24:05.763 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:24:05.763 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 132], 99.95th=[ 132], 00:24:05.763 | 99.99th=[ 132] 00:24:05.763 bw ( KiB/s): min= 808, max= 1384, per=3.96%, avg=1031.68, stdev=145.16, samples=19 00:24:05.763 iops : min= 202, max= 346, avg=257.89, stdev=36.28, samples=19 00:24:05.763 lat (msec) : 50=33.49%, 100=63.08%, 250=3.43% 00:24:05.763 cpu : usr=32.75%, sys=0.31%, ctx=882, majf=0, minf=9 00:24:05.763 IO depths : 1=1.5%, 2=4.0%, 4=13.8%, 8=69.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98549: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.1MiB/10054msec) 00:24:05.763 slat (usec): min=3, max=8043, avg=17.28, stdev=223.49 00:24:05.763 clat (msec): min=28, max=126, avg=62.12, stdev=16.52 00:24:05.763 lat (msec): min=28, max=126, avg=62.14, stdev=16.51 00:24:05.763 clat percentiles (msec): 00:24:05.763 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:24:05.763 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 66], 00:24:05.763 | 70.00th=[ 72], 
80.00th=[ 77], 90.00th=[ 84], 95.00th=[ 94], 00:24:05.763 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 127], 99.95th=[ 127], 00:24:05.763 | 99.99th=[ 127] 00:24:05.763 bw ( KiB/s): min= 848, max= 1240, per=3.94%, avg=1027.00, stdev=130.67, samples=19 00:24:05.763 iops : min= 212, max= 310, avg=256.74, stdev=32.68, samples=19 00:24:05.763 lat (msec) : 50=33.80%, 100=64.81%, 250=1.40% 00:24:05.763 cpu : usr=36.38%, sys=0.32%, ctx=1152, majf=0, minf=9 00:24:05.763 IO depths : 1=2.3%, 2=5.3%, 4=14.4%, 8=66.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=91.3%, 8=3.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98550: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=308, BW=1233KiB/s (1263kB/s)(12.1MiB/10032msec) 00:24:05.763 slat (usec): min=5, max=8037, avg=19.89, stdev=217.83 00:24:05.763 clat (usec): min=701, max=127003, avg=51716.76, stdev=20625.98 00:24:05.763 lat (usec): min=711, max=127008, avg=51736.65, stdev=20626.65 00:24:05.763 clat percentiles (usec): 00:24:05.763 | 1.00th=[ 1270], 5.00th=[ 6915], 10.00th=[ 30016], 20.00th=[ 35390], 00:24:05.763 | 30.00th=[ 42730], 40.00th=[ 47449], 50.00th=[ 49546], 60.00th=[ 55837], 00:24:05.763 | 70.00th=[ 63701], 80.00th=[ 70779], 90.00th=[ 76022], 95.00th=[ 84411], 00:24:05.763 | 99.00th=[ 95945], 99.50th=[ 96994], 99.90th=[127402], 99.95th=[127402], 00:24:05.763 | 99.99th=[127402] 00:24:05.763 bw ( KiB/s): min= 768, max= 2816, per=4.72%, avg=1230.30, stdev=402.82, samples=20 00:24:05.763 iops : min= 192, max= 704, avg=307.55, stdev=100.71, samples=20 00:24:05.763 lat (usec) : 750=0.06%, 1000=0.06% 00:24:05.763 lat (msec) : 2=2.46%, 4=0.52%, 10=2.59%, 20=0.52%, 50=45.85% 00:24:05.763 lat (msec) : 100=47.49%, 250=0.45% 00:24:05.763 cpu : usr=48.25%, sys=0.35%, ctx=1169, majf=0, minf=0 00:24:05.763 IO depths : 1=2.4%, 2=5.1%, 4=14.5%, 8=67.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: total=3093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98551: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=263, BW=1056KiB/s (1081kB/s)(10.3MiB/10017msec) 00:24:05.763 slat (nsec): min=4657, max=75167, avg=13049.27, stdev=11211.31 00:24:05.763 clat (msec): min=21, max=125, avg=60.52, stdev=16.52 00:24:05.763 lat (msec): min=21, max=125, avg=60.53, stdev=16.52 00:24:05.763 clat percentiles (msec): 00:24:05.763 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 47], 00:24:05.763 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 65], 00:24:05.763 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 91], 00:24:05.763 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 110], 99.95th=[ 110], 00:24:05.763 | 99.99th=[ 126] 00:24:05.763 bw ( KiB/s): min= 768, max= 1328, per=4.07%, avg=1059.37, stdev=150.15, samples=19 00:24:05.763 iops : min= 192, max= 332, avg=264.84, stdev=37.54, samples=19 00:24:05.763 lat (msec) : 50=33.36%, 100=64.45%, 250=2.19% 00:24:05.763 cpu : usr=44.18%, sys=0.32%, ctx=1284, majf=0, minf=9 00:24:05.763 IO depths : 1=2.5%, 2=5.7%, 4=14.9%, 
8=66.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=91.6%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 issued rwts: total=2644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.763 filename0: (groupid=0, jobs=1): err= 0: pid=98552: Thu Jul 25 07:38:36 2024 00:24:05.763 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10014msec) 00:24:05.763 slat (usec): min=5, max=8038, avg=20.18, stdev=266.19 00:24:05.763 clat (msec): min=10, max=128, avg=56.77, stdev=19.10 00:24:05.763 lat (msec): min=10, max=128, avg=56.79, stdev=19.10 00:24:05.763 clat percentiles (msec): 00:24:05.763 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 38], 00:24:05.763 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 61], 00:24:05.763 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:24:05.763 | 99.00th=[ 114], 99.50th=[ 118], 99.90th=[ 129], 99.95th=[ 129], 00:24:05.763 | 99.99th=[ 129] 00:24:05.763 bw ( KiB/s): min= 816, max= 1424, per=4.30%, avg=1119.60, stdev=190.32, samples=20 00:24:05.763 iops : min= 204, max= 356, avg=279.90, stdev=47.58, samples=20 00:24:05.763 lat (msec) : 20=1.07%, 50=42.86%, 100=54.05%, 250=2.02% 00:24:05.763 cpu : usr=32.75%, sys=0.29%, ctx=905, majf=0, minf=9 00:24:05.763 IO depths : 1=1.0%, 2=2.4%, 4=9.7%, 8=74.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:24:05.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.763 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98553: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=287, BW=1151KiB/s (1179kB/s)(11.3MiB/10038msec) 00:24:05.764 slat (usec): min=5, max=8017, avg=18.44, stdev=197.38 00:24:05.764 clat (msec): min=18, max=120, avg=55.42, stdev=18.25 00:24:05.764 lat (msec): min=18, max=120, avg=55.44, stdev=18.25 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 40], 00:24:05.764 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:24:05.764 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 88], 00:24:05.764 | 99.00th=[ 106], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:24:05.764 | 99.99th=[ 121] 00:24:05.764 bw ( KiB/s): min= 768, max= 1632, per=4.41%, avg=1149.30, stdev=234.23, samples=20 00:24:05.764 iops : min= 192, max= 408, avg=287.30, stdev=58.55, samples=20 00:24:05.764 lat (msec) : 20=0.35%, 50=46.35%, 100=50.64%, 250=2.67% 00:24:05.764 cpu : usr=40.16%, sys=0.30%, ctx=1088, majf=0, minf=9 00:24:05.764 IO depths : 1=1.6%, 2=3.7%, 4=12.3%, 8=70.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98554: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=263, BW=1056KiB/s (1081kB/s)(10.3MiB/10016msec) 00:24:05.764 slat (usec): min=2, max=12022, avg=18.71, stdev=280.84 00:24:05.764 clat (msec): min=20, max=120, avg=60.45, stdev=17.61 00:24:05.764 lat (msec): 
min=20, max=120, avg=60.46, stdev=17.61 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 48], 00:24:05.764 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 60], 60.00th=[ 63], 00:24:05.764 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 95], 00:24:05.764 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 121], 99.95th=[ 121], 00:24:05.764 | 99.99th=[ 121] 00:24:05.764 bw ( KiB/s): min= 768, max= 1280, per=4.06%, avg=1056.74, stdev=153.77, samples=19 00:24:05.764 iops : min= 192, max= 320, avg=264.16, stdev=38.47, samples=19 00:24:05.764 lat (msec) : 50=39.86%, 100=57.15%, 250=2.99% 00:24:05.764 cpu : usr=32.74%, sys=0.28%, ctx=974, majf=0, minf=9 00:24:05.764 IO depths : 1=1.3%, 2=3.4%, 4=11.2%, 8=71.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98555: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=290, BW=1162KiB/s (1189kB/s)(11.4MiB/10045msec) 00:24:05.764 slat (usec): min=5, max=4016, avg=12.64, stdev=74.65 00:24:05.764 clat (msec): min=23, max=119, avg=55.01, stdev=16.18 00:24:05.764 lat (msec): min=23, max=119, avg=55.02, stdev=16.18 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 43], 00:24:05.764 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 52], 60.00th=[ 58], 00:24:05.764 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 84], 00:24:05.764 | 99.00th=[ 96], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:24:05.764 | 99.99th=[ 121] 00:24:05.764 bw ( KiB/s): min= 896, max= 1584, per=4.45%, avg=1160.40, stdev=183.60, samples=20 00:24:05.764 iops : min= 224, max= 396, avg=290.10, stdev=45.90, samples=20 00:24:05.764 lat (msec) : 50=45.97%, 100=53.48%, 250=0.55% 00:24:05.764 cpu : usr=39.91%, sys=0.35%, ctx=1140, majf=0, minf=9 00:24:05.764 IO depths : 1=1.1%, 2=2.7%, 4=10.3%, 8=73.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98556: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=261, BW=1045KiB/s (1070kB/s)(10.2MiB/10020msec) 00:24:05.764 slat (usec): min=3, max=5153, avg=19.21, stdev=153.82 00:24:05.764 clat (msec): min=25, max=126, avg=61.08, stdev=16.92 00:24:05.764 lat (msec): min=25, max=126, avg=61.10, stdev=16.92 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 47], 00:24:05.764 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 64], 00:24:05.764 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 91], 00:24:05.764 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 127], 99.95th=[ 127], 00:24:05.764 | 99.99th=[ 127] 00:24:05.764 bw ( KiB/s): min= 640, max= 1328, per=3.99%, avg=1039.16, stdev=162.51, samples=19 00:24:05.764 iops : min= 160, max= 332, avg=259.79, stdev=40.63, samples=19 00:24:05.764 lat (msec) : 50=31.97%, 100=65.13%, 250=2.90% 00:24:05.764 cpu : usr=44.03%, sys=0.38%, ctx=1319, 
majf=0, minf=9 00:24:05.764 IO depths : 1=2.2%, 2=4.8%, 4=13.6%, 8=67.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=91.3%, 8=4.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98557: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=295, BW=1183KiB/s (1211kB/s)(11.6MiB/10011msec) 00:24:05.764 slat (usec): min=4, max=5027, avg=16.61, stdev=149.85 00:24:05.764 clat (msec): min=5, max=125, avg=53.98, stdev=20.90 00:24:05.764 lat (msec): min=5, max=125, avg=54.00, stdev=20.90 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 36], 00:24:05.764 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 55], 00:24:05.764 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:24:05.764 | 99.00th=[ 107], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 126], 00:24:05.764 | 99.99th=[ 126] 00:24:05.764 bw ( KiB/s): min= 768, max= 1792, per=4.52%, avg=1177.85, stdev=295.69, samples=20 00:24:05.764 iops : min= 192, max= 448, avg=294.45, stdev=73.93, samples=20 00:24:05.764 lat (msec) : 10=1.62%, 50=49.61%, 100=45.66%, 250=3.11% 00:24:05.764 cpu : usr=45.09%, sys=0.22%, ctx=1352, majf=0, minf=9 00:24:05.764 IO depths : 1=1.5%, 2=3.6%, 4=12.7%, 8=70.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98558: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10044msec) 00:24:05.764 slat (usec): min=5, max=8021, avg=19.43, stdev=203.67 00:24:05.764 clat (msec): min=21, max=107, avg=59.05, stdev=16.45 00:24:05.764 lat (msec): min=21, max=107, avg=59.07, stdev=16.44 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:24:05.764 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 63], 00:24:05.764 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 82], 95.00th=[ 90], 00:24:05.764 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 108], 99.95th=[ 108], 00:24:05.764 | 99.99th=[ 108] 00:24:05.764 bw ( KiB/s): min= 816, max= 1456, per=4.14%, avg=1079.60, stdev=175.66, samples=20 00:24:05.764 iops : min= 204, max= 364, avg=269.90, stdev=43.92, samples=20 00:24:05.764 lat (msec) : 50=39.26%, 100=59.56%, 250=1.18% 00:24:05.764 cpu : usr=40.34%, sys=0.29%, ctx=1202, majf=0, minf=9 00:24:05.764 IO depths : 1=2.2%, 2=5.0%, 4=14.1%, 8=67.8%, 16=10.8%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98559: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=242, BW=970KiB/s (993kB/s)(9732KiB/10032msec) 00:24:05.764 slat (usec): min=4, max=3828, avg=12.67, stdev=77.79 00:24:05.764 clat (msec): min=24, 
max=132, avg=65.83, stdev=20.14 00:24:05.764 lat (msec): min=24, max=132, avg=65.84, stdev=20.14 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:24:05.764 | 30.00th=[ 50], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 71], 00:24:05.764 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 102], 00:24:05.764 | 99.00th=[ 118], 99.50th=[ 125], 99.90th=[ 133], 99.95th=[ 133], 00:24:05.764 | 99.99th=[ 133] 00:24:05.764 bw ( KiB/s): min= 640, max= 1232, per=3.74%, avg=973.89, stdev=173.37, samples=19 00:24:05.764 iops : min= 160, max= 308, avg=243.47, stdev=43.34, samples=19 00:24:05.764 lat (msec) : 50=31.94%, 100=62.84%, 250=5.22% 00:24:05.764 cpu : usr=35.64%, sys=0.22%, ctx=1045, majf=0, minf=9 00:24:05.764 IO depths : 1=1.8%, 2=3.8%, 4=12.1%, 8=70.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:24:05.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 complete : 0=0.0%, 4=90.5%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.764 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.764 filename1: (groupid=0, jobs=1): err= 0: pid=98560: Thu Jul 25 07:38:36 2024 00:24:05.764 read: IOPS=294, BW=1179KiB/s (1207kB/s)(11.6MiB/10065msec) 00:24:05.764 slat (usec): min=5, max=8038, avg=17.07, stdev=180.86 00:24:05.764 clat (msec): min=4, max=120, avg=54.08, stdev=20.06 00:24:05.764 lat (msec): min=4, max=120, avg=54.10, stdev=20.06 00:24:05.764 clat percentiles (msec): 00:24:05.764 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 36], 00:24:05.764 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 58], 00:24:05.765 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 88], 00:24:05.765 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 121], 00:24:05.765 | 99.99th=[ 121] 00:24:05.765 bw ( KiB/s): min= 808, max= 1968, per=4.53%, avg=1179.90, stdev=319.82, samples=20 00:24:05.765 iops : min= 202, max= 492, avg=294.95, stdev=79.95, samples=20 00:24:05.765 lat (msec) : 10=1.62%, 20=0.54%, 50=43.12%, 100=52.70%, 250=2.02% 00:24:05.765 cpu : usr=44.32%, sys=0.39%, ctx=1283, majf=0, minf=9 00:24:05.765 IO depths : 1=1.4%, 2=3.1%, 4=12.1%, 8=71.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=90.0%, 8=4.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98561: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=277, BW=1109KiB/s (1135kB/s)(10.9MiB/10035msec) 00:24:05.765 slat (usec): min=5, max=8020, avg=16.17, stdev=214.86 00:24:05.765 clat (msec): min=21, max=111, avg=57.63, stdev=18.33 00:24:05.765 lat (msec): min=21, max=111, avg=57.65, stdev=18.34 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 43], 00:24:05.765 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 61], 00:24:05.765 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 93], 00:24:05.765 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 112], 99.95th=[ 112], 00:24:05.765 | 99.99th=[ 112] 00:24:05.765 bw ( KiB/s): min= 768, max= 1456, per=4.25%, avg=1106.00, stdev=191.93, samples=20 00:24:05.765 iops : min= 192, max= 364, avg=276.50, stdev=47.98, samples=20 00:24:05.765 lat (msec) : 50=45.38%, 
100=52.50%, 250=2.12% 00:24:05.765 cpu : usr=32.81%, sys=0.21%, ctx=923, majf=0, minf=9 00:24:05.765 IO depths : 1=0.8%, 2=2.1%, 4=9.3%, 8=75.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98562: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=263, BW=1053KiB/s (1078kB/s)(10.3MiB/10033msec) 00:24:05.765 slat (usec): min=2, max=8024, avg=16.67, stdev=156.39 00:24:05.765 clat (msec): min=20, max=131, avg=60.63, stdev=19.11 00:24:05.765 lat (msec): min=20, max=131, avg=60.65, stdev=19.12 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 47], 00:24:05.765 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 62], 00:24:05.765 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:24:05.765 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:24:05.765 | 99.99th=[ 132] 00:24:05.765 bw ( KiB/s): min= 640, max= 1424, per=4.03%, avg=1049.60, stdev=168.68, samples=20 00:24:05.765 iops : min= 160, max= 356, avg=262.40, stdev=42.17, samples=20 00:24:05.765 lat (msec) : 50=40.72%, 100=56.17%, 250=3.11% 00:24:05.765 cpu : usr=32.95%, sys=0.25%, ctx=916, majf=0, minf=9 00:24:05.765 IO depths : 1=1.5%, 2=3.4%, 4=11.3%, 8=71.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98563: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=276, BW=1105KiB/s (1132kB/s)(10.8MiB/10017msec) 00:24:05.765 slat (usec): min=5, max=8052, avg=25.04, stdev=304.41 00:24:05.765 clat (msec): min=5, max=118, avg=57.73, stdev=18.48 00:24:05.765 lat (msec): min=5, max=118, avg=57.76, stdev=18.48 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 00:24:05.765 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 58], 60.00th=[ 61], 00:24:05.765 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:24:05.765 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 120], 00:24:05.765 | 99.99th=[ 120] 00:24:05.765 bw ( KiB/s): min= 816, max= 1664, per=4.24%, avg=1103.05, stdev=194.99, samples=20 00:24:05.765 iops : min= 204, max= 416, avg=275.75, stdev=48.75, samples=20 00:24:05.765 lat (msec) : 10=1.41%, 20=0.33%, 50=42.59%, 100=53.65%, 250=2.02% 00:24:05.765 cpu : usr=33.05%, sys=0.20%, ctx=917, majf=0, minf=9 00:24:05.765 IO depths : 1=1.4%, 2=3.3%, 4=11.0%, 8=72.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98564: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=295, BW=1184KiB/s (1212kB/s)(11.6MiB/10038msec) 00:24:05.765 slat (usec): 
min=5, max=10024, avg=17.11, stdev=210.56 00:24:05.765 clat (msec): min=14, max=122, avg=53.82, stdev=18.11 00:24:05.765 lat (msec): min=14, max=122, avg=53.84, stdev=18.12 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 37], 00:24:05.765 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 57], 00:24:05.765 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 84], 00:24:05.765 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 123], 99.95th=[ 123], 00:24:05.765 | 99.99th=[ 123] 00:24:05.765 bw ( KiB/s): min= 784, max= 1712, per=4.55%, avg=1184.80, stdev=255.34, samples=20 00:24:05.765 iops : min= 196, max= 428, avg=296.20, stdev=63.83, samples=20 00:24:05.765 lat (msec) : 20=0.54%, 50=46.85%, 100=50.99%, 250=1.62% 00:24:05.765 cpu : usr=43.09%, sys=0.34%, ctx=1460, majf=0, minf=9 00:24:05.765 IO depths : 1=1.1%, 2=2.6%, 4=10.2%, 8=73.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98565: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.8MiB/10038msec) 00:24:05.765 slat (usec): min=5, max=8025, avg=17.72, stdev=187.22 00:24:05.765 clat (msec): min=21, max=113, avg=58.00, stdev=17.76 00:24:05.765 lat (msec): min=21, max=113, avg=58.02, stdev=17.76 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 45], 00:24:05.765 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 61], 00:24:05.765 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 91], 00:24:05.765 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 114], 00:24:05.765 | 99.99th=[ 114] 00:24:05.765 bw ( KiB/s): min= 896, max= 1288, per=4.21%, avg=1097.70, stdev=132.61, samples=20 00:24:05.765 iops : min= 224, max= 322, avg=274.40, stdev=33.12, samples=20 00:24:05.765 lat (msec) : 50=45.58%, 100=52.57%, 250=1.85% 00:24:05.765 cpu : usr=45.14%, sys=0.24%, ctx=1348, majf=0, minf=9 00:24:05.765 IO depths : 1=2.4%, 2=5.1%, 4=14.5%, 8=67.1%, 16=10.9%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=91.5%, 8=3.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98566: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=253, BW=1014KiB/s (1039kB/s)(9.93MiB/10028msec) 00:24:05.765 slat (usec): min=3, max=4032, avg=19.06, stdev=166.88 00:24:05.765 clat (msec): min=24, max=124, avg=62.96, stdev=17.42 00:24:05.765 lat (msec): min=24, max=124, avg=62.98, stdev=17.42 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:24:05.765 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 68], 00:24:05.765 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 84], 95.00th=[ 95], 00:24:05.765 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 125], 00:24:05.765 | 99.99th=[ 125] 00:24:05.765 bw ( KiB/s): min= 640, max= 1256, per=3.88%, avg=1010.90, stdev=162.31, samples=20 00:24:05.765 iops : min= 160, max= 314, 
avg=252.70, stdev=40.56, samples=20 00:24:05.765 lat (msec) : 50=29.26%, 100=68.46%, 250=2.28% 00:24:05.765 cpu : usr=42.26%, sys=0.32%, ctx=1415, majf=0, minf=9 00:24:05.765 IO depths : 1=2.4%, 2=5.3%, 4=15.0%, 8=66.6%, 16=10.7%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.765 filename2: (groupid=0, jobs=1): err= 0: pid=98567: Thu Jul 25 07:38:36 2024 00:24:05.765 read: IOPS=271, BW=1084KiB/s (1110kB/s)(10.6MiB/10044msec) 00:24:05.765 slat (usec): min=5, max=8024, avg=18.94, stdev=230.43 00:24:05.765 clat (msec): min=23, max=124, avg=58.88, stdev=21.28 00:24:05.765 lat (msec): min=23, max=124, avg=58.90, stdev=21.28 00:24:05.765 clat percentiles (msec): 00:24:05.765 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 40], 00:24:05.765 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:24:05.765 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 90], 95.00th=[ 96], 00:24:05.765 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 125], 00:24:05.765 | 99.99th=[ 125] 00:24:05.765 bw ( KiB/s): min= 688, max= 1472, per=4.16%, avg=1084.50, stdev=264.44, samples=20 00:24:05.765 iops : min= 172, max= 368, avg=271.10, stdev=66.07, samples=20 00:24:05.765 lat (msec) : 50=45.15%, 100=51.47%, 250=3.38% 00:24:05.765 cpu : usr=35.37%, sys=0.38%, ctx=1088, majf=0, minf=9 00:24:05.765 IO depths : 1=0.6%, 2=1.6%, 4=8.0%, 8=76.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:24:05.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.765 issued rwts: total=2722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.766 filename2: (groupid=0, jobs=1): err= 0: pid=98568: Thu Jul 25 07:38:36 2024 00:24:05.766 read: IOPS=268, BW=1076KiB/s (1102kB/s)(10.5MiB/10040msec) 00:24:05.766 slat (usec): min=5, max=8031, avg=22.16, stdev=214.02 00:24:05.766 clat (msec): min=23, max=118, avg=59.36, stdev=17.79 00:24:05.766 lat (msec): min=23, max=119, avg=59.38, stdev=17.79 00:24:05.766 clat percentiles (msec): 00:24:05.766 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 46], 00:24:05.766 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 65], 00:24:05.766 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 82], 95.00th=[ 92], 00:24:05.766 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 120], 00:24:05.766 | 99.99th=[ 120] 00:24:05.766 bw ( KiB/s): min= 768, max= 1432, per=4.12%, avg=1073.60, stdev=194.92, samples=20 00:24:05.766 iops : min= 192, max= 358, avg=268.40, stdev=48.73, samples=20 00:24:05.766 lat (msec) : 50=37.44%, 100=60.85%, 250=1.70% 00:24:05.766 cpu : usr=44.65%, sys=0.48%, ctx=1301, majf=0, minf=9 00:24:05.766 IO depths : 1=2.4%, 2=5.9%, 4=16.3%, 8=64.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:24:05.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.766 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.766 issued rwts: total=2700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.766 00:24:05.766 Run status group 0 (all jobs): 00:24:05.766 READ: bw=25.4MiB/s (26.7MB/s), 970KiB/s-1233KiB/s 
(993kB/s-1263kB/s), io=256MiB (268MB), run=10011-10065msec 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 bdev_null0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 [2024-07-25 07:38:37.184651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 bdev_null1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.766 { 00:24:05.766 "params": { 00:24:05.766 "name": "Nvme$subsystem", 00:24:05.766 "trtype": "$TEST_TRANSPORT", 00:24:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.766 "adrfam": "ipv4", 00:24:05.766 "trsvcid": "$NVMF_PORT", 00:24:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.766 "hdgst": ${hdgst:-false}, 00:24:05.766 "ddgst": ${ddgst:-false} 00:24:05.766 }, 00:24:05.766 "method": "bdev_nvme_attach_controller" 00:24:05.766 } 00:24:05.766 EOF 00:24:05.766 )") 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:05.766 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local 
fio_dir=/usr/src/fio 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.767 { 00:24:05.767 "params": { 00:24:05.767 "name": "Nvme$subsystem", 00:24:05.767 "trtype": "$TEST_TRANSPORT", 00:24:05.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.767 "adrfam": "ipv4", 00:24:05.767 "trsvcid": "$NVMF_PORT", 00:24:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.767 "hdgst": ${hdgst:-false}, 00:24:05.767 "ddgst": ${ddgst:-false} 00:24:05.767 }, 00:24:05.767 "method": "bdev_nvme_attach_controller" 00:24:05.767 } 00:24:05.767 EOF 00:24:05.767 )") 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:05.767 "params": { 00:24:05.767 "name": "Nvme0", 00:24:05.767 "trtype": "tcp", 00:24:05.767 "traddr": "10.0.0.2", 00:24:05.767 "adrfam": "ipv4", 00:24:05.767 "trsvcid": "4420", 00:24:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:05.767 "hdgst": false, 00:24:05.767 "ddgst": false 00:24:05.767 }, 00:24:05.767 "method": "bdev_nvme_attach_controller" 00:24:05.767 },{ 00:24:05.767 "params": { 00:24:05.767 "name": "Nvme1", 00:24:05.767 "trtype": "tcp", 00:24:05.767 "traddr": "10.0.0.2", 00:24:05.767 "adrfam": "ipv4", 00:24:05.767 "trsvcid": "4420", 00:24:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.767 "hdgst": false, 00:24:05.767 "ddgst": false 00:24:05.767 }, 00:24:05.767 "method": "bdev_nvme_attach_controller" 00:24:05.767 }' 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:05.767 07:38:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:05.767 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:05.767 ... 00:24:05.767 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:05.767 ... 
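For reference, the rpc_cmd calls traced above correspond to plain SPDK RPC invocations. A minimal hand-run sketch of the same subsystem setup (assuming the nvmf target and its tcp transport were already started earlier in the run, and that scripts/rpc.py is the stock helper from the repo — both assumptions, not shown in this part of the log):

    # Null bdevs with 16-byte metadata and DIF type 1, arguments copied from the trace
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    # One NVMe-oF subsystem per bdev, exported over TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON fragments printed above are then what gen_nvmf_target_json emits for the initiator side: one bdev_nvme_attach_controller parameter block per subsystem, which the fio bdev plugin attaches to before running the random-parameter job below.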
00:24:05.767 fio-3.35 00:24:05.767 Starting 4 threads 00:24:11.040 00:24:11.040 filename0: (groupid=0, jobs=1): err= 0: pid=98705: Thu Jul 25 07:38:43 2024 00:24:11.040 read: IOPS=2662, BW=20.8MiB/s (21.8MB/s)(104MiB/5003msec) 00:24:11.040 slat (nsec): min=5472, max=75279, avg=8069.97, stdev=5181.14 00:24:11.040 clat (usec): min=725, max=5430, avg=2962.01, stdev=131.37 00:24:11.040 lat (usec): min=749, max=5457, avg=2970.08, stdev=131.08 00:24:11.040 clat percentiles (usec): 00:24:11.040 | 1.00th=[ 2835], 5.00th=[ 2900], 10.00th=[ 2900], 20.00th=[ 2933], 00:24:11.040 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2966], 00:24:11.040 | 70.00th=[ 2999], 80.00th=[ 2999], 90.00th=[ 3032], 95.00th=[ 3064], 00:24:11.040 | 99.00th=[ 3130], 99.50th=[ 3163], 99.90th=[ 3523], 99.95th=[ 5211], 00:24:11.040 | 99.99th=[ 5407] 00:24:11.040 bw ( KiB/s): min=21120, max=21504, per=25.03%, avg=21299.20, stdev=106.08, samples=10 00:24:11.040 iops : min= 2640, max= 2688, avg=2662.40, stdev=13.26, samples=10 00:24:11.040 lat (usec) : 750=0.02%, 1000=0.13% 00:24:11.040 lat (msec) : 2=0.16%, 4=99.64%, 10=0.06% 00:24:11.040 cpu : usr=96.64%, sys=2.48%, ctx=53, majf=0, minf=0 00:24:11.040 IO depths : 1=9.6%, 2=24.9%, 4=50.1%, 8=15.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.040 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.040 issued rwts: total=13320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.040 filename0: (groupid=0, jobs=1): err= 0: pid=98706: Thu Jul 25 07:38:43 2024 00:24:11.040 read: IOPS=2660, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:24:11.040 slat (nsec): min=5516, max=59959, avg=13063.30, stdev=6965.64 00:24:11.040 clat (usec): min=805, max=4678, avg=2951.37, stdev=167.36 00:24:11.040 lat (usec): min=816, max=4700, avg=2964.43, stdev=166.03 00:24:11.040 clat percentiles (usec): 00:24:11.040 | 1.00th=[ 2311], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:24:11.040 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:24:11.040 | 70.00th=[ 2999], 80.00th=[ 2999], 90.00th=[ 3032], 95.00th=[ 3064], 00:24:11.040 | 99.00th=[ 3589], 99.50th=[ 3621], 99.90th=[ 4228], 99.95th=[ 4293], 00:24:11.040 | 99.99th=[ 4686] 00:24:11.040 bw ( KiB/s): min=21120, max=21376, per=25.00%, avg=21277.80, stdev=94.54, samples=10 00:24:11.040 iops : min= 2640, max= 2672, avg=2659.70, stdev=11.85, samples=10 00:24:11.040 lat (usec) : 1000=0.01% 00:24:11.040 lat (msec) : 2=0.19%, 4=99.65%, 10=0.16% 00:24:11.040 cpu : usr=96.98%, sys=2.18%, ctx=7, majf=0, minf=9 00:24:11.040 IO depths : 1=8.9%, 2=23.8%, 4=51.2%, 8=16.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.040 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.040 issued rwts: total=13304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.040 filename1: (groupid=0, jobs=1): err= 0: pid=98707: Thu Jul 25 07:38:43 2024 00:24:11.040 read: IOPS=2658, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:24:11.040 slat (nsec): min=5469, max=75475, avg=14300.94, stdev=11933.06 00:24:11.040 clat (usec): min=1053, max=5774, avg=2940.47, stdev=150.81 00:24:11.040 lat (usec): min=1062, max=5812, avg=2954.77, stdev=150.15 00:24:11.040 clat percentiles (usec): 00:24:11.040 | 1.00th=[ 2507], 
5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2900], 00:24:11.040 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:24:11.040 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 2999], 95.00th=[ 3032], 00:24:11.040 | 99.00th=[ 3490], 99.50th=[ 3589], 99.90th=[ 3916], 99.95th=[ 5211], 00:24:11.040 | 99.99th=[ 5276] 00:24:11.040 bw ( KiB/s): min=21120, max=21360, per=24.98%, avg=21260.44, stdev=90.43, samples=9 00:24:11.040 iops : min= 2640, max= 2670, avg=2657.56, stdev=11.30, samples=9 00:24:11.040 lat (msec) : 2=0.03%, 4=99.89%, 10=0.08% 00:24:11.040 cpu : usr=95.78%, sys=2.78%, ctx=106, majf=0, minf=9 00:24:11.040 IO depths : 1=0.7%, 2=25.0%, 4=50.0%, 8=24.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.041 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.041 issued rwts: total=13296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.041 filename1: (groupid=0, jobs=1): err= 0: pid=98708: Thu Jul 25 07:38:43 2024 00:24:11.041 read: IOPS=2659, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:24:11.041 slat (nsec): min=5571, max=75419, avg=11489.25, stdev=6413.09 00:24:11.041 clat (usec): min=1237, max=4711, avg=2960.69, stdev=154.20 00:24:11.041 lat (usec): min=1258, max=4737, avg=2972.18, stdev=152.49 00:24:11.041 clat percentiles (usec): 00:24:11.041 | 1.00th=[ 2278], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:24:11.041 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:24:11.041 | 70.00th=[ 2999], 80.00th=[ 2999], 90.00th=[ 3032], 95.00th=[ 3064], 00:24:11.041 | 99.00th=[ 3621], 99.50th=[ 3654], 99.90th=[ 3687], 99.95th=[ 3720], 00:24:11.041 | 99.99th=[ 4293] 00:24:11.041 bw ( KiB/s): min=21120, max=21376, per=25.00%, avg=21280.50, stdev=90.35, samples=10 00:24:11.041 iops : min= 2640, max= 2672, avg=2660.00, stdev=11.35, samples=10 00:24:11.041 lat (msec) : 2=0.02%, 4=99.97%, 10=0.02% 00:24:11.041 cpu : usr=97.06%, sys=2.04%, ctx=10, majf=0, minf=0 00:24:11.041 IO depths : 1=9.0%, 2=25.0%, 4=50.0%, 8=16.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.041 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.041 issued rwts: total=13304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.041 00:24:11.041 Run status group 0 (all jobs): 00:24:11.041 READ: bw=83.1MiB/s (87.1MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=416MiB (436MB), run=5001-5003msec 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 00:24:11.041 real 0m24.120s 00:24:11.041 user 2m10.256s 00:24:11.041 sys 0m2.787s 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 ************************************ 00:24:11.041 END TEST fio_dif_rand_params 00:24:11.041 ************************************ 00:24:11.041 07:38:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:11.041 07:38:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:11.041 07:38:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 ************************************ 00:24:11.041 START TEST fio_dif_digest 00:24:11.041 ************************************ 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 bdev_null0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.041 [2024-07-25 07:38:43.618398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.041 { 00:24:11.041 "params": { 00:24:11.041 "name": "Nvme$subsystem", 00:24:11.041 "trtype": "$TEST_TRANSPORT", 00:24:11.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.041 "adrfam": "ipv4", 00:24:11.041 "trsvcid": "$NVMF_PORT", 00:24:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.041 "hdgst": ${hdgst:-false}, 00:24:11.041 "ddgst": ${ddgst:-false} 00:24:11.041 }, 00:24:11.041 "method": 
"bdev_nvme_attach_controller" 00:24:11.041 } 00:24:11.041 EOF 00:24:11.041 )") 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:11.041 "params": { 00:24:11.041 "name": "Nvme0", 00:24:11.041 "trtype": "tcp", 00:24:11.041 "traddr": "10.0.0.2", 00:24:11.041 "adrfam": "ipv4", 00:24:11.041 "trsvcid": "4420", 00:24:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:11.041 "hdgst": true, 00:24:11.041 "ddgst": true 00:24:11.041 }, 00:24:11.041 "method": "bdev_nvme_attach_controller" 00:24:11.041 }' 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:11.041 07:38:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.300 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:11.300 ... 
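The assembled config above differs from the earlier one only in "hdgst": true and "ddgst": true, i.e. the initiator enables NVMe/TCP header and data digests for this job. It is consumed by fio through the SPDK bdev plugin exactly as traced (LD_PRELOAD of spdk_bdev, JSON config and job file passed on file descriptors). A standalone sketch of the same invocation, with the process-substitution descriptors replaced by ordinary files — bdev.json and dif.fio are placeholder names, not taken from this log:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio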
00:24:11.300 fio-3.35 00:24:11.300 Starting 3 threads 00:24:23.511 00:24:23.511 filename0: (groupid=0, jobs=1): err= 0: pid=98814: Thu Jul 25 07:38:54 2024 00:24:23.511 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(342MiB/10005msec) 00:24:23.511 slat (nsec): min=5871, max=50944, avg=12684.56, stdev=5470.56 00:24:23.511 clat (usec): min=5447, max=91716, avg=10959.90, stdev=8436.34 00:24:23.511 lat (usec): min=5469, max=91723, avg=10972.58, stdev=8436.32 00:24:23.511 clat percentiles (usec): 00:24:23.511 | 1.00th=[ 7439], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:24:23.511 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:24:23.511 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10683], 00:24:23.511 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[90702], 00:24:23.511 | 99.99th=[91751] 00:24:23.511 bw ( KiB/s): min=27136, max=41472, per=32.35%, avg=34869.89, stdev=4236.33, samples=19 00:24:23.511 iops : min= 212, max= 324, avg=272.42, stdev=33.10, samples=19 00:24:23.511 lat (msec) : 10=85.34%, 20=10.46%, 50=1.97%, 100=2.23% 00:24:23.511 cpu : usr=96.62%, sys=2.42%, ctx=9, majf=0, minf=0 00:24:23.511 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:23.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.511 issued rwts: total=2735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.511 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:23.511 filename0: (groupid=0, jobs=1): err= 0: pid=98815: Thu Jul 25 07:38:54 2024 00:24:23.511 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(319MiB/10003msec) 00:24:23.511 slat (nsec): min=5808, max=52571, avg=15782.00, stdev=12131.25 00:24:23.511 clat (usec): min=7136, max=14376, avg=11711.35, stdev=2127.14 00:24:23.511 lat (usec): min=7142, max=14409, avg=11727.13, stdev=2127.34 00:24:23.511 clat percentiles (usec): 00:24:23.511 | 1.00th=[ 7373], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[ 8586], 00:24:23.511 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:24:23.511 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:24:23.511 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14353], 99.95th=[14353], 00:24:23.511 | 99.99th=[14353] 00:24:23.511 bw ( KiB/s): min=29242, max=36096, per=30.42%, avg=32784.53, stdev=1961.39, samples=19 00:24:23.511 iops : min= 228, max= 282, avg=256.11, stdev=15.37, samples=19 00:24:23.511 lat (msec) : 10=23.85%, 20=76.15% 00:24:23.511 cpu : usr=94.99%, sys=3.53%, ctx=123, majf=0, minf=9 00:24:23.511 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:23.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.511 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.511 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:23.511 filename0: (groupid=0, jobs=1): err= 0: pid=98816: Thu Jul 25 07:38:54 2024 00:24:23.511 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(392MiB/10005msec) 00:24:23.511 slat (nsec): min=5850, max=58473, avg=12375.27, stdev=5784.79 00:24:23.511 clat (usec): min=4941, max=51864, avg=9555.03, stdev=2133.54 00:24:23.511 lat (usec): min=4948, max=51871, avg=9567.40, stdev=2133.05 00:24:23.511 clat percentiles (usec): 00:24:23.511 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7308], 00:24:23.511 | 
30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:24:23.511 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:24:23.511 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13304], 99.95th=[51119], 00:24:23.511 | 99.99th=[51643] 00:24:23.511 bw ( KiB/s): min=35584, max=44544, per=37.30%, avg=40195.79, stdev=2535.04, samples=19 00:24:23.511 iops : min= 278, max= 348, avg=314.00, stdev=19.86, samples=19 00:24:23.511 lat (msec) : 10=46.52%, 20=53.38%, 100=0.10% 00:24:23.511 cpu : usr=95.95%, sys=2.98%, ctx=22, majf=0, minf=9 00:24:23.511 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:23.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.511 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.511 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:23.511 00:24:23.511 Run status group 0 (all jobs): 00:24:23.511 READ: bw=105MiB/s (110MB/s), 31.9MiB/s-39.2MiB/s (33.5MB/s-41.1MB/s), io=1053MiB (1104MB), run=10003-10005msec 00:24:23.511 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.512 00:24:23.512 real 0m11.144s 00:24:23.512 user 0m29.532s 00:24:23.512 sys 0m1.261s 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.512 07:38:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.512 ************************************ 00:24:23.512 END TEST fio_dif_digest 00:24:23.512 ************************************ 00:24:23.512 07:38:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:23.512 07:38:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.512 rmmod nvme_tcp 00:24:23.512 rmmod nvme_fabrics 00:24:23.512 rmmod nvme_keyring 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 98030 ']' 00:24:23.512 07:38:54 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 98030 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 98030 ']' 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 98030 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98030 00:24:23.512 killing process with pid 98030 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98030' 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@967 -- # kill 98030 00:24:23.512 07:38:54 nvmf_dif -- common/autotest_common.sh@972 -- # wait 98030 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:23.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:23.512 Waiting for block devices as requested 00:24:23.512 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:23.512 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.512 07:38:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.512 07:38:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:23.512 07:38:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.512 07:38:56 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:23.512 00:24:23.512 real 1m1.115s 00:24:23.512 user 3m58.892s 00:24:23.512 sys 0m10.690s 00:24:23.512 07:38:56 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.512 07:38:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:23.512 ************************************ 00:24:23.512 END TEST nvmf_dif 00:24:23.512 ************************************ 00:24:23.512 07:38:56 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:23.512 07:38:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:23.512 07:38:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.512 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.512 ************************************ 00:24:23.512 START TEST nvmf_abort_qd_sizes 00:24:23.512 ************************************ 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:23.512 * Looking for test storage... 
00:24:23.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.512 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:23.771 07:38:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:23.771 Cannot find device "nvmf_tgt_br" 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:23.771 Cannot find device "nvmf_tgt_br2" 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:23.771 Cannot find device "nvmf_tgt_br" 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:23.771 Cannot find device "nvmf_tgt_br2" 00:24:23.771 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:23.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:23.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:23.772 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:24.030 07:38:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:24.030 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:24.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:24:24.030 00:24:24.030 --- 10.0.0.2 ping statistics --- 00:24:24.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.031 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:24.031 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:24.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:24.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:24:24.031 00:24:24.031 --- 10.0.0.3 ping statistics --- 00:24:24.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.031 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:24.031 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:24.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:24:24.031 00:24:24.031 --- 10.0.0.1 ping statistics --- 00:24:24.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.031 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:24.031 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.031 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:24.031 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:24.031 07:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:24.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:24.968 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:24.968 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:24.968 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.968 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.968 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.968 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.968 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.968 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.227 07:38:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:25.227 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.227 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.227 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:25.227 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99413 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99413 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99413 ']' 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.228 07:38:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:25.228 [2024-07-25 07:38:57.796286] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
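The trace above is nvmf_veth_init building the test network: a dedicated network namespace for the target side, veth pairs bridged back to the host, a 10.0.0.0/24 subnet, and ping checks in both directions before the target application starts. A condensed sketch of that topology, using the names and addresses the trace shows (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is left out for brevity, and no error handling is shown):

# Sketch of the nvmf_veth_init topology traced above (not the script itself).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # host initiator reaches the namespaced target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the target reaches back

With that in place the target is launched under ip netns exec nvmf_tgt_ns_spdk, which is why the nvmf_tgt process (pid 99413 below) listens on 10.0.0.2 inside the namespace while the abort workload connects from the host side.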
00:24:25.228 [2024-07-25 07:38:57.796344] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.228 [2024-07-25 07:38:57.935294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.487 [2024-07-25 07:38:58.050001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.487 [2024-07-25 07:38:58.050058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.487 [2024-07-25 07:38:58.050064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.487 [2024-07-25 07:38:58.050069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.487 [2024-07-25 07:38:58.050073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.487 [2024-07-25 07:38:58.050310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.487 [2024-07-25 07:38:58.051571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.487 [2024-07-25 07:38:58.051479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.487 [2024-07-25 07:38:58.051575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.055 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:26.056 07:38:58 
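The nvme_in_userspace walk above resolves the NVMe PCI class code (class 01, subclass 08, prog-if 02, i.e. "0108") and uses it to filter lspci output before checking driver bindings. The core of that enumeration, with the pipeline copied from the trace:

# NVMe controllers are PCI class 01 / subclass 08 / prog-if 02, hence the
# "0108" class-code filter on lspci below (pipeline as traced above).
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# This run prints 0000:00:10.0 and 0000:00:11.0; both end up in ${nvmes[@]}
# and the test takes the first BDF (0000:00:10.0) for spdk_target_abort.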
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.056 07:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:26.056 ************************************ 00:24:26.056 START TEST spdk_target_abort 00:24:26.056 ************************************ 00:24:26.056 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:24:26.056 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:26.056 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:26.056 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.056 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:26.315 spdk_targetn1 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:26.315 [2024-07-25 07:38:58.835269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.315 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:26.316 [2024-07-25 07:38:58.875412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.316 07:38:58 
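spdk_target_abort builds its target entirely over RPC, as traced above: the local NVMe disk at 0000:00:10.0 is attached as bdev spdk_targetn1, a TCP transport and a subsystem are created, the namespace is added, and a listener is opened on 10.0.0.2:4420. Issued directly with scripts/rpc.py the sequence would look roughly like this (rpc_cmd in the test is a thin wrapper around these calls; the rpc.py path assumes the repo layout seen elsewhere in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes bdev spdk_targetn1
$rpc nvmf_create_transport -t tcp -o -u 8192                              # options exactly as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The rabort runs that follow drive this listener with build/examples/abort (-w rw -M 50 -o 4096) at queue depths 4, 24 and 64, reporting for each pass how many outstanding I/Os the abort sweep managed to cancel.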
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:26.316 07:38:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:29.607 Initializing NVMe Controllers 00:24:29.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:29.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:29.608 Initialization complete. Launching workers. 
00:24:29.608 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14885, failed: 0 00:24:29.608 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1168, failed to submit 13717 00:24:29.608 success 726, unsuccess 442, failed 0 00:24:29.608 07:39:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:29.608 07:39:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:32.900 Initializing NVMe Controllers 00:24:32.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:32.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:32.900 Initialization complete. Launching workers. 00:24:32.900 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5905, failed: 0 00:24:32.900 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 4680 00:24:32.900 success 260, unsuccess 965, failed 0 00:24:32.900 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:32.900 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:36.192 Initializing NVMe Controllers 00:24:36.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:36.192 Initialization complete. Launching workers. 
00:24:36.192 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36121, failed: 0 00:24:36.192 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2768, failed to submit 33353 00:24:36.192 success 592, unsuccess 2176, failed 0 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.192 07:39:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99413 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99413 ']' 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99413 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99413 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99413' 00:24:39.521 killing process with pid 99413 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99413 00:24:39.521 07:39:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99413 00:24:39.521 00:24:39.521 real 0m13.484s 00:24:39.521 user 0m54.502s 00:24:39.521 sys 0m1.463s 00:24:39.521 07:39:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.521 07:39:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.521 ************************************ 00:24:39.521 END TEST spdk_target_abort 00:24:39.521 ************************************ 00:24:39.780 07:39:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:39.780 07:39:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:39.780 07:39:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.780 07:39:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.780 ************************************ 00:24:39.780 START TEST kernel_target_abort 00:24:39.780 
************************************ 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:39.780 07:39:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:40.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:40.348 Waiting for block devices as requested 00:24:40.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:40.348 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:40.607 No valid GPT data, bailing 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n2 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:40.607 No valid GPT data, bailing 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n3 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:40.607 No valid GPT data, bailing 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:40.607 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:40.608 No valid GPT data, bailing 00:24:40.608 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:40.867 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b --hostid=e7ba0731-437e-4daf-b47d-a61e85dc561b -a 10.0.0.1 -t tcp -s 4420 00:24:40.867 00:24:40.867 Discovery Log Number of Records 2, Generation counter 2 00:24:40.867 =====Discovery Log Entry 0====== 00:24:40.867 trtype: tcp 00:24:40.867 adrfam: ipv4 00:24:40.867 subtype: current discovery subsystem 00:24:40.867 treq: not specified, sq flow control disable supported 00:24:40.867 portid: 1 00:24:40.867 trsvcid: 4420 00:24:40.867 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:40.867 traddr: 10.0.0.1 00:24:40.868 eflags: none 00:24:40.868 sectype: none 00:24:40.868 =====Discovery Log Entry 1====== 00:24:40.868 trtype: tcp 00:24:40.868 adrfam: ipv4 00:24:40.868 subtype: nvme subsystem 00:24:40.868 treq: not specified, sq flow control disable supported 00:24:40.868 portid: 1 00:24:40.868 trsvcid: 4420 00:24:40.868 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:40.868 traddr: 10.0.0.1 00:24:40.868 eflags: none 00:24:40.868 sectype: none 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.868 07:39:13 
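kernel_target_abort repeats the exercise against the in-kernel nvmet target instead of SPDK: it picks a local NVMe block device that is not in use (/dev/nvme1n1 here, the first one with no valid GPT), exports it through configfs, and verifies the export with nvme discover. xtrace does not show the redirection targets of the echo commands, so the attribute filenames in the sketch below are the standard nvmet configfs names and should be read as an assumption; the values are exactly what the trace echoes:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # attribute name assumed
echo 1            > "$subsys/attr_allow_any_host"               # attribute name assumed
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list nqn.2016-06.io.spdk:testnqn

The discovery output captured above shows the expected two records: the discovery subsystem itself and the newly exported nqn.2016-06.io.spdk:testnqn on 10.0.0.1 port 4420, after which the same rabort queue-depth sweep is run against the kernel target.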
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.868 07:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.159 Initializing NVMe Controllers 00:24:44.159 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:44.160 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:44.160 Initialization complete. Launching workers. 00:24:44.160 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 46471, failed: 0 00:24:44.160 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 46471, failed to submit 0 00:24:44.160 success 0, unsuccess 46471, failed 0 00:24:44.160 07:39:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:44.160 07:39:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.451 Initializing NVMe Controllers 00:24:47.451 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.451 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.451 Initialization complete. Launching workers. 
00:24:47.451 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101769, failed: 0 00:24:47.451 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44697, failed to submit 57072 00:24:47.451 success 0, unsuccess 44697, failed 0 00:24:47.451 07:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:47.451 07:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:50.743 Initializing NVMe Controllers 00:24:50.743 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:50.743 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:50.743 Initialization complete. Launching workers. 00:24:50.744 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123642, failed: 0 00:24:50.744 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30948, failed to submit 92694 00:24:50.744 success 0, unsuccess 30948, failed 0 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:50.744 07:39:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:50.744 07:39:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:51.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:59.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:59.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:59.692 00:24:59.692 real 0m19.904s 00:24:59.692 user 0m6.787s 00:24:59.692 sys 0m10.810s 00:24:59.692 07:39:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:59.692 07:39:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:59.692 ************************************ 00:24:59.692 END TEST kernel_target_abort 00:24:59.692 ************************************ 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:59.692 
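clean_kernel_target, traced a little earlier in this block, undoes the configfs setup in reverse order before the nvmf cleanup continues below. In sketch form, with the same caveat that the echo redirection target is an assumption (xtrace only shows 'echo 0'):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"      # redirection target assumed
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet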
07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.692 rmmod nvme_tcp 00:24:59.692 rmmod nvme_fabrics 00:24:59.692 rmmod nvme_keyring 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99413 ']' 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99413 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99413 ']' 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99413 00:24:59.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99413) - No such process 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99413 is not found' 00:24:59.692 Process with pid 99413 is not found 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:59.692 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:00.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.259 Waiting for block devices as requested 00:25:00.259 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.519 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:00.519 00:25:00.519 real 0m37.046s 00:25:00.519 user 1m2.420s 00:25:00.519 sys 0m14.178s 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.519 07:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:00.519 ************************************ 00:25:00.519 END TEST nvmf_abort_qd_sizes 00:25:00.519 ************************************ 00:25:00.519 07:39:33 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:00.519 07:39:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:00.519 07:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.519 07:39:33 -- common/autotest_common.sh@10 -- # set +x 
00:25:00.519 ************************************ 00:25:00.519 START TEST keyring_file 00:25:00.519 ************************************ 00:25:00.519 07:39:33 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:00.777 * Looking for test storage... 00:25:00.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.777 07:39:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.777 07:39:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.777 07:39:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.777 07:39:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.777 07:39:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.777 07:39:33 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.777 07:39:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:00.777 07:39:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.k9OH6JMbW7 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:00.777 07:39:33 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.k9OH6JMbW7 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.k9OH6JMbW7 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.k9OH6JMbW7 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dhpxoGrYL9 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:00.777 07:39:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dhpxoGrYL9 00:25:00.777 07:39:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dhpxoGrYL9 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dhpxoGrYL9 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=100419 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.777 07:39:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100419 00:25:00.777 07:39:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100419 ']' 00:25:00.777 07:39:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.777 07:39:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.777 07:39:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.777 07:39:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.777 07:39:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:01.036 [2024-07-25 07:39:33.548556] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
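The prep_key calls traced above (keyring/common.sh@15-@23) show how each PSK file is produced: a mktemp path, the hex secret run through format_interchange_psk (whose inline python body is not echoed in the trace), and a chmod to 0600 before the path is handed back. A minimal sketch of that flow against the same helpers, assuming the repo layout used in this run; the key is only registered with bdevperf later, once /var/tmp/bperf.sock exists:

    # Sketch of prep_key from test/keyring/common.sh, as traced above.
    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk / format_key

    key_hex=00112233445566778899aabbccddeeff
    key_path=$(mktemp)                      # /tmp/tmp.k9OH6JMbW7 in this run

    # Convert the raw hex secret into the NVMeTLSkey-1 interchange string
    # (done by the elided inline python in the helper).
    format_interchange_psk "$key_hex" 0 > "$key_path"
    chmod 0600 "$key_path"                  # looser modes are rejected later in the test

    # Once bdevperf is up, hand it the file-backed key by name:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        keyring_file_add_key key0 "$key_path"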
00:25:01.036 [2024-07-25 07:39:33.548632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100419 ] 00:25:01.036 [2024-07-25 07:39:33.684549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.294 [2024-07-25 07:39:33.794704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:25:01.863 07:39:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:01.863 [2024-07-25 07:39:34.382803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.863 null0 00:25:01.863 [2024-07-25 07:39:34.414720] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.863 [2024-07-25 07:39:34.414922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:01.863 [2024-07-25 07:39:34.422694] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.863 07:39:34 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:01.863 [2024-07-25 07:39:34.438652] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:01.863 2024/07/25 07:39:34 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:25:01.863 request: 00:25:01.863 { 00:25:01.863 "method": "nvmf_subsystem_add_listener", 00:25:01.863 "params": { 00:25:01.863 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:01.863 "secure_channel": false, 00:25:01.863 "listen_address": { 00:25:01.863 "trtype": "tcp", 00:25:01.863 "traddr": "127.0.0.1", 00:25:01.863 "trsvcid": "4420" 00:25:01.863 } 00:25:01.863 } 00:25:01.863 } 00:25:01.863 Got JSON-RPC error 
response 00:25:01.863 GoRPCClient: error on JSON-RPC call 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:01.863 07:39:34 keyring_file -- keyring/file.sh@46 -- # bperfpid=100454 00:25:01.863 07:39:34 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:01.863 07:39:34 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100454 /var/tmp/bperf.sock 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100454 ']' 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.863 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:01.863 [2024-07-25 07:39:34.499287] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 00:25:01.863 [2024-07-25 07:39:34.499359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100454 ] 00:25:02.122 [2024-07-25 07:39:34.634858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.122 [2024-07-25 07:39:34.756875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.690 07:39:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.690 07:39:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:25:02.690 07:39:35 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:02.690 07:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:02.949 07:39:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dhpxoGrYL9 00:25:02.949 07:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dhpxoGrYL9 00:25:03.209 07:39:35 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:25:03.209 07:39:35 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:25:03.209 07:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.209 07:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.209 07:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.209 07:39:35 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.k9OH6JMbW7 == 
\/\t\m\p\/\t\m\p\.\k\9\O\H\6\J\M\b\W\7 ]] 00:25:03.209 07:39:35 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:25:03.209 07:39:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:03.209 07:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.209 07:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.209 07:39:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.468 07:39:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dhpxoGrYL9 == \/\t\m\p\/\t\m\p\.\d\h\p\x\o\G\r\Y\L\9 ]] 00:25:03.468 07:39:36 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:25:03.468 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.468 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.468 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.468 07:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.468 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.728 07:39:36 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:25:03.728 07:39:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:25:03.728 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.728 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.728 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.728 07:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.728 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.728 07:39:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:03.728 07:39:36 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.728 07:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.988 [2024-07-25 07:39:36.603495] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.988 nvme0n1 00:25:03.988 07:39:36 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:25:03.988 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.988 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.988 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.988 07:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.988 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:04.247 07:39:36 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:04.247 07:39:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:04.247 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:04.247 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.247 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:25:04.247 07:39:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.247 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:04.506 07:39:37 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:04.506 07:39:37 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.506 Running I/O for 1 seconds... 00:25:05.446 00:25:05.446 Latency(us) 00:25:05.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.446 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:05.446 nvme0n1 : 1.00 19980.61 78.05 0.00 0.00 6392.28 3334.04 15797.32 00:25:05.446 =================================================================================================================== 00:25:05.446 Total : 19980.61 78.05 0.00 0.00 6392.28 3334.04 15797.32 00:25:05.446 0 00:25:05.705 07:39:38 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:05.705 07:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:05.705 07:39:38 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:05.705 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.705 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:05.705 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.705 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.705 07:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.965 07:39:38 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:05.965 07:39:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:05.965 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:05.965 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.965 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:05.965 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.965 07:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.225 07:39:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:06.225 07:39:38 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
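Every get_key/get_refcnt check in this trace is the same two-step: ask the bperf instance for its key list with keyring_get_keys and filter the JSON with jq. The count is 1 for a key that is merely registered, 2 while nvme0 holds it as its PSK, and it drops back to 1 after bdev_nvme_detach_controller. Equivalent helpers, assuming bdevperf is still listening on /var/tmp/bperf.sock:

    # Helpers equivalent to get_key/get_refcnt in test/keyring/common.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_key() {     # JSON object describing one named key
        "$rpc" -s "$sock" keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }

    get_refcnt() {  # just its reference count
        get_key "$1" | jq -r .refcnt
    }

    # Prints 2 while nvme0 is attached with --psk key0; 1 after detach.
    get_refcnt key0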
00:25:06.225 07:39:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.225 07:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.225 [2024-07-25 07:39:38.950157] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:06.225 [2024-07-25 07:39:38.950858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1398f30 (107): Transport endpoint is not connected 00:25:06.225 [2024-07-25 07:39:38.951847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1398f30 (9): Bad file descriptor 00:25:06.225 [2024-07-25 07:39:38.952844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:06.225 [2024-07-25 07:39:38.952863] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:06.225 [2024-07-25 07:39:38.952870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:06.225 2024/07/25 07:39:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:06.225 request: 00:25:06.225 { 00:25:06.225 "method": "bdev_nvme_attach_controller", 00:25:06.225 "params": { 00:25:06.225 "name": "nvme0", 00:25:06.225 "trtype": "tcp", 00:25:06.225 "traddr": "127.0.0.1", 00:25:06.225 "adrfam": "ipv4", 00:25:06.225 "trsvcid": "4420", 00:25:06.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:06.225 "prchk_reftag": false, 00:25:06.225 "prchk_guard": false, 00:25:06.225 "hdgst": false, 00:25:06.225 "ddgst": false, 00:25:06.225 "psk": "key1" 00:25:06.225 } 00:25:06.225 } 00:25:06.225 Got JSON-RPC error response 00:25:06.225 GoRPCClient: error on JSON-RPC call 00:25:06.485 07:39:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:06.485 07:39:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.485 07:39:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.485 07:39:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.485 07:39:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:25:06.485 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.485 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.485 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.485 07:39:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.485 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.485 07:39:39 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:25:06.485 
07:39:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:25:06.485 07:39:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:06.485 07:39:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.485 07:39:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.485 07:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.485 07:39:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:06.745 07:39:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:06.745 07:39:39 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:25:06.745 07:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:07.006 07:39:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:25:07.006 07:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:07.265 07:39:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:25:07.265 07:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.265 07:39:39 keyring_file -- keyring/file.sh@77 -- # jq length 00:25:07.265 07:39:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:25:07.265 07:39:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.k9OH6JMbW7 00:25:07.265 07:39:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:07.265 07:39:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:07.265 07:39:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:07.265 07:39:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:07.265 07:39:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.265 07:39:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:07.265 07:39:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.266 07:39:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:07.266 07:39:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:07.524 [2024-07-25 07:39:40.125004] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.k9OH6JMbW7': 0100660 00:25:07.524 [2024-07-25 07:39:40.125039] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:07.524 2024/07/25 07:39:40 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.k9OH6JMbW7], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:25:07.524 request: 00:25:07.524 { 00:25:07.524 "method": "keyring_file_add_key", 00:25:07.524 "params": { 00:25:07.524 "name": "key0", 00:25:07.524 "path": "/tmp/tmp.k9OH6JMbW7" 00:25:07.524 } 00:25:07.524 } 00:25:07.524 Got JSON-RPC error response 00:25:07.524 GoRPCClient: error on JSON-RPC call 00:25:07.524 07:39:40 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:25:07.524 07:39:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.524 07:39:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.524 07:39:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.524 07:39:40 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.k9OH6JMbW7 00:25:07.524 07:39:40 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:07.524 07:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.k9OH6JMbW7 00:25:07.784 07:39:40 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.k9OH6JMbW7 00:25:07.784 07:39:40 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:25:07.784 07:39:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.784 07:39:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:07.784 07:39:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.784 07:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.784 07:39:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:08.044 07:39:40 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:25:08.044 07:39:40 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.044 07:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.044 [2024-07-25 07:39:40.720001] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.k9OH6JMbW7': No such file or directory 00:25:08.044 [2024-07-25 07:39:40.720039] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:08.044 [2024-07-25 07:39:40.720060] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:08.044 [2024-07-25 07:39:40.720066] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:08.044 [2024-07-25 07:39:40.720074] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:08.044 2024/07/25 
07:39:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:25:08.044 request: 00:25:08.044 { 00:25:08.044 "method": "bdev_nvme_attach_controller", 00:25:08.044 "params": { 00:25:08.044 "name": "nvme0", 00:25:08.044 "trtype": "tcp", 00:25:08.044 "traddr": "127.0.0.1", 00:25:08.044 "adrfam": "ipv4", 00:25:08.044 "trsvcid": "4420", 00:25:08.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.044 "prchk_reftag": false, 00:25:08.044 "prchk_guard": false, 00:25:08.044 "hdgst": false, 00:25:08.044 "ddgst": false, 00:25:08.044 "psk": "key0" 00:25:08.044 } 00:25:08.044 } 00:25:08.044 Got JSON-RPC error response 00:25:08.044 GoRPCClient: error on JSON-RPC call 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:08.044 07:39:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:08.044 07:39:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:25:08.044 07:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:08.304 07:39:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pRHKYdwFAP 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:08.304 07:39:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:08.304 07:39:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:08.304 07:39:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:08.304 07:39:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:08.304 07:39:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:08.304 07:39:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pRHKYdwFAP 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pRHKYdwFAP 00:25:08.304 07:39:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.pRHKYdwFAP 00:25:08.304 07:39:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pRHKYdwFAP 00:25:08.304 07:39:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pRHKYdwFAP 00:25:08.564 07:39:41 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.564 07:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.824 nvme0n1 00:25:08.824 07:39:41 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:25:08.824 07:39:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:08.824 07:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:08.824 07:39:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:08.824 07:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:08.824 07:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.083 07:39:41 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:25:09.083 07:39:41 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:25:09.083 07:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:09.343 07:39:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:25:09.343 07:39:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:25:09.343 07:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.343 07:39:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.343 07:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.343 07:39:42 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:25:09.343 07:39:42 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:25:09.343 07:39:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:09.343 07:39:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.343 07:39:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.343 07:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.343 07:39:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.602 07:39:42 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:25:09.602 07:39:42 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:09.602 07:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:09.860 07:39:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:25:09.861 07:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.861 07:39:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:25:10.120 07:39:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:25:10.120 07:39:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pRHKYdwFAP 00:25:10.120 07:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pRHKYdwFAP 
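The keyring_file_remove_key sequence traced above (file.sh@99-@104) is checking removal semantics rather than the happy path: removing key0 while nvme0 still references it leaves the key listed with "removed": true and a refcnt of 1, and only after the controller is detached does keyring_get_keys go empty. Condensed, reusing the helpers sketched earlier:

    # Remove a key that an attached controller still references.
    "$rpc" -s "$sock" keyring_file_remove_key key0
    get_key key0 | jq -r .removed                     # -> true (nvme0 still holds it)
    get_refcnt key0                                   # -> 1

    # Detaching releases the last reference and the key disappears from the list.
    "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
    "$rpc" -s "$sock" keyring_get_keys | jq length    # -> 0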
00:25:10.120 07:39:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dhpxoGrYL9 00:25:10.120 07:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dhpxoGrYL9 00:25:10.381 07:39:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:10.381 07:39:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:10.654 nvme0n1 00:25:10.654 07:39:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:25:10.654 07:39:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:10.926 07:39:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:25:10.926 "subsystems": [ 00:25:10.926 { 00:25:10.926 "subsystem": "keyring", 00:25:10.926 "config": [ 00:25:10.926 { 00:25:10.926 "method": "keyring_file_add_key", 00:25:10.926 "params": { 00:25:10.926 "name": "key0", 00:25:10.926 "path": "/tmp/tmp.pRHKYdwFAP" 00:25:10.926 } 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "method": "keyring_file_add_key", 00:25:10.926 "params": { 00:25:10.926 "name": "key1", 00:25:10.926 "path": "/tmp/tmp.dhpxoGrYL9" 00:25:10.926 } 00:25:10.926 } 00:25:10.926 ] 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "subsystem": "iobuf", 00:25:10.926 "config": [ 00:25:10.926 { 00:25:10.926 "method": "iobuf_set_options", 00:25:10.926 "params": { 00:25:10.926 "large_bufsize": 135168, 00:25:10.926 "large_pool_count": 1024, 00:25:10.926 "small_bufsize": 8192, 00:25:10.926 "small_pool_count": 8192 00:25:10.926 } 00:25:10.926 } 00:25:10.926 ] 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "subsystem": "sock", 00:25:10.926 "config": [ 00:25:10.926 { 00:25:10.926 "method": "sock_set_default_impl", 00:25:10.926 "params": { 00:25:10.926 "impl_name": "posix" 00:25:10.926 } 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "method": "sock_impl_set_options", 00:25:10.926 "params": { 00:25:10.926 "enable_ktls": false, 00:25:10.926 "enable_placement_id": 0, 00:25:10.926 "enable_quickack": false, 00:25:10.926 "enable_recv_pipe": true, 00:25:10.926 "enable_zerocopy_send_client": false, 00:25:10.926 "enable_zerocopy_send_server": true, 00:25:10.926 "impl_name": "ssl", 00:25:10.926 "recv_buf_size": 4096, 00:25:10.926 "send_buf_size": 4096, 00:25:10.926 "tls_version": 0, 00:25:10.926 "zerocopy_threshold": 0 00:25:10.926 } 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "method": "sock_impl_set_options", 00:25:10.926 "params": { 00:25:10.926 "enable_ktls": false, 00:25:10.926 "enable_placement_id": 0, 00:25:10.926 "enable_quickack": false, 00:25:10.926 "enable_recv_pipe": true, 00:25:10.926 "enable_zerocopy_send_client": false, 00:25:10.926 "enable_zerocopy_send_server": true, 00:25:10.926 "impl_name": "posix", 00:25:10.926 "recv_buf_size": 2097152, 00:25:10.926 "send_buf_size": 2097152, 00:25:10.926 "tls_version": 0, 00:25:10.926 "zerocopy_threshold": 0 00:25:10.926 } 00:25:10.926 } 00:25:10.926 ] 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "subsystem": "vmd", 00:25:10.926 "config": [] 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "subsystem": "accel", 00:25:10.926 "config": [ 00:25:10.926 { 00:25:10.926 "method": 
"accel_set_options", 00:25:10.926 "params": { 00:25:10.926 "buf_count": 2048, 00:25:10.926 "large_cache_size": 16, 00:25:10.926 "sequence_count": 2048, 00:25:10.926 "small_cache_size": 128, 00:25:10.926 "task_count": 2048 00:25:10.926 } 00:25:10.926 } 00:25:10.926 ] 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "subsystem": "bdev", 00:25:10.926 "config": [ 00:25:10.926 { 00:25:10.926 "method": "bdev_set_options", 00:25:10.926 "params": { 00:25:10.926 "bdev_auto_examine": true, 00:25:10.926 "bdev_io_cache_size": 256, 00:25:10.926 "bdev_io_pool_size": 65535, 00:25:10.926 "iobuf_large_cache_size": 16, 00:25:10.926 "iobuf_small_cache_size": 128 00:25:10.926 } 00:25:10.926 }, 00:25:10.926 { 00:25:10.926 "method": "bdev_raid_set_options", 00:25:10.926 "params": { 00:25:10.926 "process_max_bandwidth_mb_sec": 0, 00:25:10.926 "process_window_size_kb": 1024 00:25:10.926 } 00:25:10.926 }, 00:25:10.926 { 00:25:10.927 "method": "bdev_iscsi_set_options", 00:25:10.927 "params": { 00:25:10.927 "timeout_sec": 30 00:25:10.927 } 00:25:10.927 }, 00:25:10.927 { 00:25:10.927 "method": "bdev_nvme_set_options", 00:25:10.927 "params": { 00:25:10.927 "action_on_timeout": "none", 00:25:10.927 "allow_accel_sequence": false, 00:25:10.927 "arbitration_burst": 0, 00:25:10.927 "bdev_retry_count": 3, 00:25:10.927 "ctrlr_loss_timeout_sec": 0, 00:25:10.927 "delay_cmd_submit": true, 00:25:10.927 "dhchap_dhgroups": [ 00:25:10.927 "null", 00:25:10.927 "ffdhe2048", 00:25:10.927 "ffdhe3072", 00:25:10.927 "ffdhe4096", 00:25:10.927 "ffdhe6144", 00:25:10.927 "ffdhe8192" 00:25:10.927 ], 00:25:10.927 "dhchap_digests": [ 00:25:10.927 "sha256", 00:25:10.927 "sha384", 00:25:10.927 "sha512" 00:25:10.927 ], 00:25:10.927 "disable_auto_failback": false, 00:25:10.927 "fast_io_fail_timeout_sec": 0, 00:25:10.927 "generate_uuids": false, 00:25:10.927 "high_priority_weight": 0, 00:25:10.927 "io_path_stat": false, 00:25:10.927 "io_queue_requests": 512, 00:25:10.927 "keep_alive_timeout_ms": 10000, 00:25:10.927 "low_priority_weight": 0, 00:25:10.927 "medium_priority_weight": 0, 00:25:10.927 "nvme_adminq_poll_period_us": 10000, 00:25:10.927 "nvme_error_stat": false, 00:25:10.927 "nvme_ioq_poll_period_us": 0, 00:25:10.927 "rdma_cm_event_timeout_ms": 0, 00:25:10.927 "rdma_max_cq_size": 0, 00:25:10.927 "rdma_srq_size": 0, 00:25:10.927 "reconnect_delay_sec": 0, 00:25:10.927 "timeout_admin_us": 0, 00:25:10.927 "timeout_us": 0, 00:25:10.927 "transport_ack_timeout": 0, 00:25:10.927 "transport_retry_count": 4, 00:25:10.927 "transport_tos": 0 00:25:10.927 } 00:25:10.927 }, 00:25:10.927 { 00:25:10.927 "method": "bdev_nvme_attach_controller", 00:25:10.927 "params": { 00:25:10.927 "adrfam": "IPv4", 00:25:10.927 "ctrlr_loss_timeout_sec": 0, 00:25:10.927 "ddgst": false, 00:25:10.927 "fast_io_fail_timeout_sec": 0, 00:25:10.927 "hdgst": false, 00:25:10.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.927 "name": "nvme0", 00:25:10.927 "prchk_guard": false, 00:25:10.927 "prchk_reftag": false, 00:25:10.927 "psk": "key0", 00:25:10.927 "reconnect_delay_sec": 0, 00:25:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.927 "traddr": "127.0.0.1", 00:25:10.927 "trsvcid": "4420", 00:25:10.927 "trtype": "TCP" 00:25:10.927 } 00:25:10.927 }, 00:25:10.927 { 00:25:10.927 "method": "bdev_nvme_set_hotplug", 00:25:10.927 "params": { 00:25:10.927 "enable": false, 00:25:10.927 "period_us": 100000 00:25:10.927 } 00:25:10.927 }, 00:25:10.927 { 00:25:10.927 "method": "bdev_wait_for_examine" 00:25:10.927 } 00:25:10.927 ] 00:25:10.927 }, 00:25:10.927 { 00:25:10.927 "subsystem": 
"nbd", 00:25:10.927 "config": [] 00:25:10.927 } 00:25:10.927 ] 00:25:10.927 }' 00:25:10.927 07:39:43 keyring_file -- keyring/file.sh@114 -- # killprocess 100454 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100454 ']' 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100454 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100454 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:10.927 killing process with pid 100454 00:25:10.927 Received shutdown signal, test time was about 1.000000 seconds 00:25:10.927 00:25:10.927 Latency(us) 00:25:10.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.927 =================================================================================================================== 00:25:10.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100454' 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@967 -- # kill 100454 00:25:10.927 07:39:43 keyring_file -- common/autotest_common.sh@972 -- # wait 100454 00:25:11.187 07:39:43 keyring_file -- keyring/file.sh@117 -- # bperfpid=100905 00:25:11.187 07:39:43 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100905 /var/tmp/bperf.sock 00:25:11.187 07:39:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100905 ']' 00:25:11.187 07:39:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:11.187 07:39:43 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:11.187 07:39:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.187 07:39:43 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:25:11.187 "subsystems": [ 00:25:11.187 { 00:25:11.187 "subsystem": "keyring", 00:25:11.187 "config": [ 00:25:11.187 { 00:25:11.187 "method": "keyring_file_add_key", 00:25:11.187 "params": { 00:25:11.187 "name": "key0", 00:25:11.187 "path": "/tmp/tmp.pRHKYdwFAP" 00:25:11.187 } 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "method": "keyring_file_add_key", 00:25:11.187 "params": { 00:25:11.187 "name": "key1", 00:25:11.187 "path": "/tmp/tmp.dhpxoGrYL9" 00:25:11.187 } 00:25:11.187 } 00:25:11.187 ] 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "subsystem": "iobuf", 00:25:11.187 "config": [ 00:25:11.187 { 00:25:11.187 "method": "iobuf_set_options", 00:25:11.187 "params": { 00:25:11.187 "large_bufsize": 135168, 00:25:11.187 "large_pool_count": 1024, 00:25:11.187 "small_bufsize": 8192, 00:25:11.187 "small_pool_count": 8192 00:25:11.187 } 00:25:11.187 } 00:25:11.187 ] 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "subsystem": "sock", 00:25:11.187 "config": [ 00:25:11.187 { 00:25:11.187 "method": "sock_set_default_impl", 00:25:11.187 "params": { 00:25:11.187 "impl_name": "posix" 00:25:11.187 } 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "method": "sock_impl_set_options", 00:25:11.187 "params": { 00:25:11.187 "enable_ktls": false, 00:25:11.187 "enable_placement_id": 0, 00:25:11.187 
"enable_quickack": false, 00:25:11.187 "enable_recv_pipe": true, 00:25:11.187 "enable_zerocopy_send_client": false, 00:25:11.187 "enable_zerocopy_send_server": true, 00:25:11.187 "impl_name": "ssl", 00:25:11.187 "recv_buf_size": 4096, 00:25:11.187 "send_buf_size": 4096, 00:25:11.187 "tls_version": 0, 00:25:11.187 "zerocopy_threshold": 0 00:25:11.187 } 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "method": "sock_impl_set_options", 00:25:11.187 "params": { 00:25:11.187 "enable_ktls": false, 00:25:11.187 "enable_placement_id": 0, 00:25:11.187 "enable_quickack": false, 00:25:11.187 "enable_recv_pipe": true, 00:25:11.187 "enable_zerocopy_send_client": false, 00:25:11.187 "enable_zerocopy_send_server": true, 00:25:11.187 "impl_name": "posix", 00:25:11.187 "recv_buf_size": 2097152, 00:25:11.187 "send_buf_size": 2097152, 00:25:11.187 "tls_version": 0, 00:25:11.187 "zerocopy_threshold": 0 00:25:11.187 } 00:25:11.187 } 00:25:11.187 ] 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "subsystem": "vmd", 00:25:11.187 "config": [] 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "subsystem": "accel", 00:25:11.187 "config": [ 00:25:11.187 { 00:25:11.187 "method": "accel_set_options", 00:25:11.187 "params": { 00:25:11.187 "buf_count": 2048, 00:25:11.187 "large_cache_size": 16, 00:25:11.187 "sequence_count": 2048, 00:25:11.187 "small_cache_size": 128, 00:25:11.187 "task_count": 2048 00:25:11.187 } 00:25:11.187 } 00:25:11.187 ] 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "subsystem": "bdev", 00:25:11.187 "config": [ 00:25:11.187 { 00:25:11.187 "method": "bdev_set_options", 00:25:11.187 "params": { 00:25:11.187 "bdev_auto_examine": true, 00:25:11.187 "bdev_io_cache_size": 256, 00:25:11.187 "bdev_io_pool_size": 65535, 00:25:11.187 "iobuf_large_cache_size": 16, 00:25:11.187 "iobuf_small_cache_size": 128 00:25:11.187 } 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "method": "bdev_raid_set_options", 00:25:11.187 "params": { 00:25:11.187 "process_max_bandwidth_mb_sec": 0, 00:25:11.187 "process_window_size_kb": 1024 00:25:11.187 } 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "method": "bdev_iscsi_set_options", 00:25:11.187 "params": { 00:25:11.187 "timeout_sec": 30 00:25:11.187 } 00:25:11.187 }, 00:25:11.187 { 00:25:11.187 "method": "bdev_nvme_set_options", 00:25:11.187 "params": { 00:25:11.187 "action_on_timeout": "none", 00:25:11.187 "allow_accel_sequence": false, 00:25:11.187 "arbitration_burst": 0, 00:25:11.187 "bdev_retry_count": 3, 00:25:11.187 "ctrlr_loss_timeout_sec": 0, 00:25:11.187 "delay_cmd_submit": true, 00:25:11.187 "dhchap_dhgroups": [ 00:25:11.187 "null", 00:25:11.187 "ffdhe2048", 00:25:11.187 "ffdhe3072", 00:25:11.187 "ffdhe4096", 00:25:11.188 "ffdhe6144", 00:25:11.188 "ffdhe8192" 00:25:11.188 ], 00:25:11.188 "dhchap_digests": [ 00:25:11.188 "sha256", 00:25:11.188 "sha384", 00:25:11.188 "sha512" 00:25:11.188 ], 00:25:11.188 "disable_auto_failback": false, 00:25:11.188 "fast_io_fail_timeout_sec": 0, 00:25:11.188 "generate_uuids": false, 00:25:11.188 "high_priority_weight": 0, 00:25:11.188 "io_path_stat": false, 00:25:11.188 "io_queue_requests": 512, 00:25:11.188 "keep_alive_timeout_ms": 10000, 00:25:11.188 "low_priority_weight": 0, 00:25:11.188 "medium_priority_weight": 0, 00:25:11.188 "nvme_adminq_poll_period_us": 10000, 00:25:11.188 "nvme_error_stat": false, 00:25:11.188 "nvme_ioq_poll_period_us": 0, 00:25:11.188 "rdma_cm_event_timeout_ms": 0, 00:25:11.188 "rdma_max_cq_size": 0, 00:25:11.188 "rdma_srq_size": 0, 00:25:11.188 "reconnect_delay_sec": 0, 00:25:11.188 "timeout_admin_us": 0, 00:25:11.188 
"timeout_us": 0, 00:25:11.188 "transport_ack_timeout": 0, 00:25:11.188 "transport_retry_count": 4, 00:25:11.188 "transport_tos": 0 00:25:11.188 } 00:25:11.188 }, 00:25:11.188 { 00:25:11.188 "method": "bdev_nvme_attach_controller", 00:25:11.188 "params": { 00:25:11.188 "adrfam": "IPv4", 00:25:11.188 "ctrlr_loss_timeout_sec": 0, 00:25:11.188 "ddgst": false, 00:25:11.188 "fast_io_fail_timeout_sec": 0, 00:25:11.188 "hdgst": false, 00:25:11.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:11.188 "name": "nvme0", 00:25:11.188 "prchk_guard": false, 00:25:11.188 "prchk_reftag": false, 00:25:11.188 "psk": "key0", 00:25:11.188 "reconnect_delay_sec": 0, 00:25:11.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.188 "traddr": "127.0.0.1", 00:25:11.188 "trsvcid": "4420", 00:25:11.188 "trtype": "TCP" 00:25:11.188 } 00:25:11.188 }, 00:25:11.188 { 00:25:11.188 "method": "bdev_nvme_set_hotplug", 00:25:11.188 "params": { 00:25:11.188 "enable": false, 00:25:11.188 "period_us": 100000 00:25:11.188 } 00:25:11.188 }, 00:25:11.188 { 00:25:11.188 "method": "bdev_wait_for_examine" 00:25:11.188 } 00:25:11.188 ] 00:25:11.188 }, 00:25:11.188 { 00:25:11.188 "subsystem": "nbd", 00:25:11.188 "config": [] 00:25:11.188 } 00:25:11.188 ] 00:25:11.188 }' 00:25:11.188 07:39:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:11.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:11.188 07:39:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.188 07:39:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:11.447 [2024-07-25 07:39:43.947971] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:25:11.447 [2024-07-25 07:39:43.948052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100905 ] 00:25:11.447 [2024-07-25 07:39:44.083330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.706 [2024-07-25 07:39:44.192752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.706 [2024-07-25 07:39:44.407647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:12.274 07:39:44 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.274 07:39:44 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:25:12.274 07:39:44 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:25:12.274 07:39:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.274 07:39:44 keyring_file -- keyring/file.sh@120 -- # jq length 00:25:12.274 07:39:44 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:25:12.274 07:39:44 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:25:12.274 07:39:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:12.274 07:39:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.274 07:39:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.274 07:39:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.274 07:39:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:12.533 07:39:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:12.533 07:39:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:25:12.533 07:39:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.533 07:39:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:12.533 07:39:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.533 07:39:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:12.533 07:39:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.792 07:39:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:25:12.792 07:39:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:25:12.792 07:39:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:25:12.792 07:39:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:13.051 07:39:45 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:25:13.051 07:39:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:13.051 07:39:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pRHKYdwFAP /tmp/tmp.dhpxoGrYL9 00:25:13.051 07:39:45 keyring_file -- keyring/file.sh@20 -- # killprocess 100905 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100905 ']' 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100905 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
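The second bdevperf (pid 100905) never receives the keys over RPC: file.sh@112 snapshots the first instance's state with save_config, and the new process is launched with -c /dev/fd/63 so the keyring_file_add_key entries and the bdev_nvme_attach_controller with "psk": "key0" are replayed at startup; the checks at file.sh@120-@123 then only confirm that the replay worked. A rough equivalent that writes the JSON to an ordinary file (config.json is a placeholder name, not taken from the log):

    # Snapshot the running configuration (keyring, sock, bdev subsystems, ...).
    "$rpc" -s "$sock" save_config > config.json

    # After the first instance has been stopped, start a fresh bdevperf that
    # rebuilds the same state from the JSON at init time.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c config.json &
    # (the test waits for the socket with waitforlisten before issuing RPCs)

    # The replay should leave both keys registered, key0 pinned by the
    # recreated controller, and the controller back under its configured name.
    "$rpc" -s "$sock" keyring_get_keys | jq length                   # -> 2
    get_refcnt key0                                                  # -> 2
    get_refcnt key1                                                  # -> 1
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0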
00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100905 00:25:13.051 killing process with pid 100905 00:25:13.051 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.051 00:25:13.051 Latency(us) 00:25:13.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.051 =================================================================================================================== 00:25:13.051 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100905' 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@967 -- # kill 100905 00:25:13.051 07:39:45 keyring_file -- common/autotest_common.sh@972 -- # wait 100905 00:25:13.310 07:39:45 keyring_file -- keyring/file.sh@21 -- # killprocess 100419 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100419 ']' 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100419 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100419 00:25:13.310 killing process with pid 100419 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100419' 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@967 -- # kill 100419 00:25:13.310 [2024-07-25 07:39:45.968056] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:13.310 07:39:45 keyring_file -- common/autotest_common.sh@972 -- # wait 100419 00:25:13.878 00:25:13.878 real 0m13.343s 00:25:13.878 user 0m31.101s 00:25:13.878 sys 0m3.419s 00:25:13.878 07:39:46 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.878 07:39:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:13.878 ************************************ 00:25:13.878 END TEST keyring_file 00:25:13.878 ************************************ 00:25:13.878 07:39:46 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:25:13.878 07:39:46 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:13.878 07:39:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:13.878 07:39:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.878 07:39:46 -- common/autotest_common.sh@10 -- # set +x 00:25:13.878 ************************************ 00:25:13.878 START TEST keyring_linux 00:25:13.878 ************************************ 00:25:13.878 07:39:46 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:14.138 * Looking for test storage... 
00:25:14.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7ba0731-437e-4daf-b47d-a61e85dc561b 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e7ba0731-437e-4daf-b47d-a61e85dc561b 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.138 07:39:46 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.138 07:39:46 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.138 07:39:46 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.138 07:39:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.138 07:39:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.138 07:39:46 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.138 07:39:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:14.138 07:39:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:25:14.138 07:39:46 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:14.138 /tmp/:spdk-test:key0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:25:14.138 07:39:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:14.138 /tmp/:spdk-test:key1 00:25:14.138 07:39:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101059 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.138 07:39:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101059 00:25:14.138 07:39:46 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101059 ']' 00:25:14.138 07:39:46 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.138 07:39:46 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.138 07:39:46 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.138 07:39:46 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.138 07:39:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:14.398 [2024-07-25 07:39:46.908391] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:25:14.398 [2024-07-25 07:39:46.908459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101059 ] 00:25:14.398 [2024-07-25 07:39:47.044014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.656 [2024-07-25 07:39:47.164996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:25:15.225 07:39:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:15.225 [2024-07-25 07:39:47.741586] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.225 null0 00:25:15.225 [2024-07-25 07:39:47.773509] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:15.225 [2024-07-25 07:39:47.773733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.225 07:39:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:15.225 612395873 00:25:15.225 07:39:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:15.225 43328535 00:25:15.225 07:39:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101090 00:25:15.225 07:39:47 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:15.225 07:39:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101090 /var/tmp/bperf.sock 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101090 ']' 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.225 07:39:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:15.225 [2024-07-25 07:39:47.852619] Starting SPDK v24.09-pre git sha1 c0d54772e / DPDK 24.03.0 initialization... 
00:25:15.225 [2024-07-25 07:39:47.852697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101090 ] 00:25:15.484 [2024-07-25 07:39:47.991822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.484 [2024-07-25 07:39:48.120864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.052 07:39:48 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.052 07:39:48 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:25:16.052 07:39:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:16.052 07:39:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:16.311 07:39:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:16.311 07:39:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:16.569 07:39:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:16.569 07:39:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:16.828 [2024-07-25 07:39:49.408656] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.828 nvme0n1 00:25:16.828 07:39:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:16.828 07:39:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:16.828 07:39:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:16.828 07:39:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:16.828 07:39:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.828 07:39:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:17.088 07:39:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:17.088 07:39:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:17.088 07:39:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:17.088 07:39:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:17.088 07:39:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:17.088 07:39:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:17.088 07:39:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:17.345 07:39:49 keyring_linux -- keyring/linux.sh@25 -- # sn=612395873 00:25:17.345 07:39:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:17.345 07:39:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:17.345 07:39:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 612395873 == \6\1\2\3\9\5\8\7\3 ]] 00:25:17.345 07:39:49 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 612395873 00:25:17.345 07:39:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:17.345 07:39:49 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.345 Running I/O for 1 seconds... 00:25:18.722 00:25:18.722 Latency(us) 00:25:18.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.722 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:18.722 nvme0n1 : 1.01 21496.80 83.97 0.00 0.00 5932.85 2203.61 7211.82 00:25:18.722 =================================================================================================================== 00:25:18.722 Total : 21496.80 83.97 0.00 0.00 5932.85 2203.61 7211.82 00:25:18.722 0 00:25:18.722 07:39:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:18.722 07:39:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:18.722 07:39:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:18.722 07:39:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:18.722 07:39:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:18.722 07:39:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:18.722 07:39:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:18.722 07:39:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.982 07:39:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:25:18.982 [2024-07-25 07:39:51.638380] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:18.982 [2024-07-25 07:39:51.639046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1394ea0 (107): Transport endpoint is not connected 00:25:18.982 [2024-07-25 07:39:51.640033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1394ea0 (9): Bad file descriptor 00:25:18.982 [2024-07-25 07:39:51.641029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.982 [2024-07-25 07:39:51.641050] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:18.982 [2024-07-25 07:39:51.641057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.982 2024/07/25 07:39:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:18.982 request: 00:25:18.982 { 00:25:18.982 "method": "bdev_nvme_attach_controller", 00:25:18.982 "params": { 00:25:18.982 "name": "nvme0", 00:25:18.982 "trtype": "tcp", 00:25:18.982 "traddr": "127.0.0.1", 00:25:18.982 "adrfam": "ipv4", 00:25:18.982 "trsvcid": "4420", 00:25:18.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:18.982 "prchk_reftag": false, 00:25:18.982 "prchk_guard": false, 00:25:18.982 "hdgst": false, 00:25:18.982 "ddgst": false, 00:25:18.982 "psk": ":spdk-test:key1" 00:25:18.982 } 00:25:18.982 } 00:25:18.982 Got JSON-RPC error response 00:25:18.982 GoRPCClient: error on JSON-RPC call 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # sn=612395873 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 612395873 00:25:18.982 1 links removed 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # sn=43328535 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 43328535 00:25:18.982 1 links removed 00:25:18.982 07:39:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101090 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101090 ']' 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101090 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.982 07:39:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101090 00:25:19.241 07:39:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:19.241 07:39:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.241 killing process with pid 101090 00:25:19.241 07:39:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101090' 00:25:19.241 07:39:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 101090 00:25:19.241 Received shutdown signal, test time was about 1.000000 seconds 00:25:19.241 00:25:19.241 Latency(us) 00:25:19.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.241 =================================================================================================================== 00:25:19.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.241 07:39:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 101090 00:25:19.500 07:39:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101059 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101059 ']' 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101059 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101059 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:19.500 killing process with pid 101059 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101059' 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@967 -- # kill 101059 00:25:19.500 07:39:52 keyring_linux -- common/autotest_common.sh@972 -- # wait 101059 00:25:20.068 00:25:20.068 real 0m6.024s 00:25:20.068 user 0m10.696s 00:25:20.068 sys 0m1.818s 00:25:20.068 07:39:52 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:20.068 07:39:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:20.068 ************************************ 00:25:20.068 END TEST keyring_linux 00:25:20.068 ************************************ 00:25:20.068 07:39:52 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 
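For reference, the keyring_linux run above keeps its TLS PSKs in the kernel session keyring and refers to them by name. A small sketch of that lifecycle, pieced together from the keyctl and rpc.py calls visible in this log; the key names and sample PSK payload are the test's own values, not production material:

    #!/usr/bin/env bash
    # Sketch: register a TLS PSK in the session keyring, use it by name, then remove it.
    keyctl add user ":spdk-test:key0" \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

    # bdevperf resolves ":spdk-test:key0" through the keyring_linux module, e.g.:
    # rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    #     -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    #     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    #     --psk :spdk-test:key0

    # Cleanup, as linux.sh does: look the key up by name, then unlink its serial number.
    sn=$(keyctl search @s user ":spdk-test:key0")
    keyctl unlink "$sn"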
00:25:20.068 07:39:52 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:20.068 07:39:52 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:20.068 07:39:52 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:20.068 07:39:52 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:20.068 07:39:52 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:20.068 07:39:52 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:20.068 07:39:52 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:20.068 07:39:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.068 07:39:52 -- common/autotest_common.sh@10 -- # set +x 00:25:20.068 07:39:52 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:20.068 07:39:52 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:25:20.068 07:39:52 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:25:20.068 07:39:52 -- common/autotest_common.sh@10 -- # set +x 00:25:22.600 INFO: APP EXITING 00:25:22.600 INFO: killing all VMs 00:25:22.600 INFO: killing vhost app 00:25:22.600 INFO: EXIT DONE 00:25:23.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:23.170 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:23.170 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:24.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.108 Cleaning 00:25:24.108 Removing: /var/run/dpdk/spdk0/config 00:25:24.108 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:24.108 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:24.108 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:24.108 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:24.108 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:24.108 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:24.108 Removing: /var/run/dpdk/spdk1/config 00:25:24.108 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:24.108 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:24.108 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:24.108 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:24.108 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:24.108 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:24.108 Removing: /var/run/dpdk/spdk2/config 00:25:24.108 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:24.108 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:24.108 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:24.108 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:24.108 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:24.367 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:24.367 Removing: /var/run/dpdk/spdk3/config 00:25:24.367 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:24.367 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:24.367 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:24.367 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:24.367 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:24.367 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:24.367 Removing: /var/run/dpdk/spdk4/config 00:25:24.367 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:24.367 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:24.367 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:24.367 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:24.367 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:24.367 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:24.367 Removing: /dev/shm/nvmf_trace.0 00:25:24.367 Removing: /dev/shm/spdk_tgt_trace.pid60713 00:25:24.367 Removing: /var/run/dpdk/spdk0 00:25:24.367 Removing: /var/run/dpdk/spdk1 00:25:24.367 Removing: /var/run/dpdk/spdk2 00:25:24.367 Removing: /var/run/dpdk/spdk3 00:25:24.367 Removing: /var/run/dpdk/spdk4 00:25:24.367 Removing: /var/run/dpdk/spdk_pid100419 00:25:24.367 Removing: /var/run/dpdk/spdk_pid100454 00:25:24.367 Removing: /var/run/dpdk/spdk_pid100905 00:25:24.367 Removing: /var/run/dpdk/spdk_pid101059 00:25:24.367 Removing: /var/run/dpdk/spdk_pid101090 00:25:24.367 Removing: /var/run/dpdk/spdk_pid60573 00:25:24.367 Removing: /var/run/dpdk/spdk_pid60713 00:25:24.367 Removing: /var/run/dpdk/spdk_pid60974 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61065 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61100 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61210 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61240 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61358 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61632 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61808 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61879 00:25:24.367 Removing: /var/run/dpdk/spdk_pid61971 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62060 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62099 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62129 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62196 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62319 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62934 00:25:24.367 Removing: /var/run/dpdk/spdk_pid62992 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63056 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63084 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63153 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63180 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63259 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63286 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63333 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63363 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63409 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63439 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63586 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63621 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63696 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63766 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63791 00:25:24.367 Removing: /var/run/dpdk/spdk_pid63849 00:25:24.626 Removing: /var/run/dpdk/spdk_pid63884 00:25:24.626 Removing: /var/run/dpdk/spdk_pid63918 00:25:24.626 Removing: /var/run/dpdk/spdk_pid63953 00:25:24.626 Removing: /var/run/dpdk/spdk_pid63987 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64022 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64051 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64091 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64120 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64160 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64189 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64229 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64258 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64296 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64329 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64359 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64398 00:25:24.626 
Removing: /var/run/dpdk/spdk_pid64430 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64475 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64504 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64546 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64612 00:25:24.626 Removing: /var/run/dpdk/spdk_pid64717 00:25:24.626 Removing: /var/run/dpdk/spdk_pid65153 00:25:24.626 Removing: /var/run/dpdk/spdk_pid65490 00:25:24.626 Removing: /var/run/dpdk/spdk_pid67929 00:25:24.626 Removing: /var/run/dpdk/spdk_pid67975 00:25:24.626 Removing: /var/run/dpdk/spdk_pid68298 00:25:24.626 Removing: /var/run/dpdk/spdk_pid68347 00:25:24.626 Removing: /var/run/dpdk/spdk_pid68697 00:25:24.626 Removing: /var/run/dpdk/spdk_pid69223 00:25:24.626 Removing: /var/run/dpdk/spdk_pid69661 00:25:24.626 Removing: /var/run/dpdk/spdk_pid70641 00:25:24.626 Removing: /var/run/dpdk/spdk_pid71615 00:25:24.626 Removing: /var/run/dpdk/spdk_pid71738 00:25:24.626 Removing: /var/run/dpdk/spdk_pid71802 00:25:24.626 Removing: /var/run/dpdk/spdk_pid73257 00:25:24.626 Removing: /var/run/dpdk/spdk_pid73542 00:25:24.626 Removing: /var/run/dpdk/spdk_pid76869 00:25:24.626 Removing: /var/run/dpdk/spdk_pid77252 00:25:24.626 Removing: /var/run/dpdk/spdk_pid77806 00:25:24.626 Removing: /var/run/dpdk/spdk_pid78215 00:25:24.626 Removing: /var/run/dpdk/spdk_pid83421 00:25:24.626 Removing: /var/run/dpdk/spdk_pid83853 00:25:24.627 Removing: /var/run/dpdk/spdk_pid83962 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84114 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84158 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84201 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84247 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84405 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84553 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84816 00:25:24.627 Removing: /var/run/dpdk/spdk_pid84932 00:25:24.627 Removing: /var/run/dpdk/spdk_pid85180 00:25:24.627 Removing: /var/run/dpdk/spdk_pid85305 00:25:24.627 Removing: /var/run/dpdk/spdk_pid85436 00:25:24.627 Removing: /var/run/dpdk/spdk_pid85785 00:25:24.627 Removing: /var/run/dpdk/spdk_pid86228 00:25:24.627 Removing: /var/run/dpdk/spdk_pid86536 00:25:24.627 Removing: /var/run/dpdk/spdk_pid87036 00:25:24.627 Removing: /var/run/dpdk/spdk_pid87038 00:25:24.627 Removing: /var/run/dpdk/spdk_pid87376 00:25:24.627 Removing: /var/run/dpdk/spdk_pid87396 00:25:24.886 Removing: /var/run/dpdk/spdk_pid87410 00:25:24.886 Removing: /var/run/dpdk/spdk_pid87435 00:25:24.886 Removing: /var/run/dpdk/spdk_pid87446 00:25:24.886 Removing: /var/run/dpdk/spdk_pid87808 00:25:24.886 Removing: /var/run/dpdk/spdk_pid87852 00:25:24.886 Removing: /var/run/dpdk/spdk_pid88184 00:25:24.886 Removing: /var/run/dpdk/spdk_pid88435 00:25:24.886 Removing: /var/run/dpdk/spdk_pid88921 00:25:24.886 Removing: /var/run/dpdk/spdk_pid89505 00:25:24.886 Removing: /var/run/dpdk/spdk_pid90800 00:25:24.886 Removing: /var/run/dpdk/spdk_pid91401 00:25:24.886 Removing: /var/run/dpdk/spdk_pid91409 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93332 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93420 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93505 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93595 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93751 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93837 00:25:24.886 Removing: /var/run/dpdk/spdk_pid93918 00:25:24.886 Removing: /var/run/dpdk/spdk_pid94008 00:25:24.886 Removing: /var/run/dpdk/spdk_pid94348 00:25:24.886 Removing: /var/run/dpdk/spdk_pid95037 00:25:24.886 Removing: /var/run/dpdk/spdk_pid96385 00:25:24.886 Removing: 
/var/run/dpdk/spdk_pid96590 00:25:24.886 Removing: /var/run/dpdk/spdk_pid96876 00:25:24.886 Removing: /var/run/dpdk/spdk_pid97174 00:25:24.886 Removing: /var/run/dpdk/spdk_pid97736 00:25:24.886 Removing: /var/run/dpdk/spdk_pid97747 00:25:24.886 Removing: /var/run/dpdk/spdk_pid98105 00:25:24.886 Removing: /var/run/dpdk/spdk_pid98268 00:25:24.886 Removing: /var/run/dpdk/spdk_pid98430 00:25:24.886 Removing: /var/run/dpdk/spdk_pid98530 00:25:24.886 Removing: /var/run/dpdk/spdk_pid98690 00:25:24.886 Removing: /var/run/dpdk/spdk_pid98809 00:25:24.886 Removing: /var/run/dpdk/spdk_pid99482 00:25:24.886 Removing: /var/run/dpdk/spdk_pid99523 00:25:24.886 Removing: /var/run/dpdk/spdk_pid99558 00:25:24.886 Removing: /var/run/dpdk/spdk_pid99839 00:25:24.886 Removing: /var/run/dpdk/spdk_pid99869 00:25:24.886 Removing: /var/run/dpdk/spdk_pid99906 00:25:24.886 Clean 00:25:24.886 07:39:57 -- common/autotest_common.sh@1449 -- # return 0 00:25:24.886 07:39:57 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:24.886 07:39:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.886 07:39:57 -- common/autotest_common.sh@10 -- # set +x 00:25:25.146 07:39:57 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:25.146 07:39:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.146 07:39:57 -- common/autotest_common.sh@10 -- # set +x 00:25:25.146 07:39:57 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:25.146 07:39:57 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:25.146 07:39:57 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:25.146 07:39:57 -- spdk/autotest.sh@391 -- # hash lcov 00:25:25.146 07:39:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:25.146 07:39:57 -- spdk/autotest.sh@393 -- # hostname 00:25:25.146 07:39:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:25.405 geninfo: WARNING: invalid characters removed from testname! 
00:25:47.336 07:40:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:50.627 07:40:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:52.527 07:40:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.428 07:40:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:56.965 07:40:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:58.870 07:40:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:00.786 07:40:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:00.787 07:40:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:00.787 07:40:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:00.787 07:40:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.787 07:40:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.787 07:40:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.787 07:40:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.787 07:40:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.787 07:40:33 -- paths/export.sh@5 -- $ export PATH 00:26:00.787 07:40:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.787 07:40:33 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:00.787 07:40:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:26:00.787 07:40:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721893233.XXXXXX 00:26:00.787 07:40:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721893233.K66am1 00:26:00.787 07:40:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:26:00.787 07:40:33 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:26:00.787 07:40:33 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:00.787 07:40:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:00.787 07:40:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:00.787 07:40:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:26:00.787 07:40:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:26:00.787 07:40:33 -- common/autotest_common.sh@10 -- $ set +x 00:26:00.787 07:40:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:26:00.787 07:40:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:26:00.787 07:40:33 -- pm/common@17 -- $ local monitor 00:26:00.787 07:40:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:00.787 07:40:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:00.787 07:40:33 -- pm/common@25 -- $ sleep 1 00:26:00.787 07:40:33 -- pm/common@21 -- $ date +%s 00:26:00.787 07:40:33 -- pm/common@21 -- $ date +%s 00:26:00.787 07:40:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721893233 00:26:00.787 07:40:33 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721893233 00:26:00.787 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721893233_collect-vmstat.pm.log 00:26:00.787 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721893233_collect-cpu-load.pm.log 00:26:01.728 07:40:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:26:01.728 07:40:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:26:01.728 07:40:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:26:01.728 07:40:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:01.728 07:40:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:01.728 07:40:34 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:01.728 07:40:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:01.728 07:40:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:01.728 07:40:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:01.988 07:40:34 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:01.988 07:40:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:01.988 07:40:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:01.988 07:40:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:01.988 07:40:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:01.988 07:40:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:01.988 07:40:34 -- pm/common@44 -- $ pid=102834 00:26:01.988 07:40:34 -- pm/common@50 -- $ kill -TERM 102834 00:26:01.988 07:40:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:01.988 07:40:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:01.988 07:40:34 -- pm/common@44 -- $ pid=102836 00:26:01.988 07:40:34 -- pm/common@50 -- $ kill -TERM 102836 00:26:01.988 + [[ -n 5328 ]] 00:26:01.988 + sudo kill 5328 00:26:01.998 [Pipeline] } 00:26:02.017 [Pipeline] // timeout 00:26:02.023 [Pipeline] } 00:26:02.041 [Pipeline] // stage 00:26:02.047 [Pipeline] } 00:26:02.065 [Pipeline] // catchError 00:26:02.075 [Pipeline] stage 00:26:02.077 [Pipeline] { (Stop VM) 00:26:02.091 [Pipeline] sh 00:26:02.372 + vagrant halt 00:26:04.942 ==> default: Halting domain... 00:26:11.568 [Pipeline] sh 00:26:11.851 + vagrant destroy -f 00:26:14.388 ==> default: Removing domain... 
00:26:14.401 [Pipeline] sh 00:26:14.684 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:26:14.692 [Pipeline] } 00:26:14.710 [Pipeline] // stage 00:26:14.715 [Pipeline] } 00:26:14.732 [Pipeline] // dir 00:26:14.737 [Pipeline] } 00:26:14.749 [Pipeline] // wrap 00:26:14.755 [Pipeline] } 00:26:14.766 [Pipeline] // catchError 00:26:14.774 [Pipeline] stage 00:26:14.777 [Pipeline] { (Epilogue) 00:26:14.792 [Pipeline] sh 00:26:15.074 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:20.363 [Pipeline] catchError 00:26:20.365 [Pipeline] { 00:26:20.379 [Pipeline] sh 00:26:20.661 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:20.662 Artifacts sizes are good 00:26:20.670 [Pipeline] } 00:26:20.687 [Pipeline] // catchError 00:26:20.697 [Pipeline] archiveArtifacts 00:26:20.704 Archiving artifacts 00:26:20.849 [Pipeline] cleanWs 00:26:20.858 [WS-CLEANUP] Deleting project workspace... 00:26:20.858 [WS-CLEANUP] Deferred wipeout is used... 00:26:20.862 [WS-CLEANUP] done 00:26:20.863 [Pipeline] } 00:26:20.874 [Pipeline] // stage 00:26:20.878 [Pipeline] } 00:26:20.890 [Pipeline] // node 00:26:20.893 [Pipeline] End of Pipeline 00:26:20.925 Finished: SUCCESS